2023-07-11 15:33:29,357 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53 2023-07-11 15:33:29,377 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-11 15:33:29,397 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-11 15:33:29,398 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec, deleteOnExit=true 2023-07-11 15:33:29,398 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-11 15:33:29,399 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/test.cache.data in system properties and HBase conf 2023-07-11 15:33:29,400 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.tmp.dir in system properties and HBase conf 2023-07-11 15:33:29,400 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir in system properties and HBase conf 2023-07-11 15:33:29,401 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-11 15:33:29,401 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-11 15:33:29,401 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-11 15:33:29,512 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-11 15:33:29,941 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-11 15:33:29,946 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-11 15:33:29,946 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-11 15:33:29,947 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-11 15:33:29,947 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 15:33:29,947 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-11 15:33:29,947 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-11 15:33:29,948 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 15:33:29,948 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 15:33:29,948 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-11 15:33:29,949 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/nfs.dump.dir in system properties and HBase conf 2023-07-11 15:33:29,949 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir in system properties and HBase conf 2023-07-11 15:33:29,950 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 15:33:29,950 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-11 15:33:29,950 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-11 15:33:30,513 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 15:33:30,519 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 15:33:30,847 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-11 15:33:31,046 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-11 15:33:31,065 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:33:31,109 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:33:31,148 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/Jetty_localhost_36529_hdfs____.6qz599/webapp 2023-07-11 15:33:31,299 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36529 2023-07-11 15:33:31,344 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 15:33:31,344 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 15:33:31,863 WARN [Listener at localhost/43853] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:33:31,963 WARN [Listener at localhost/43853] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:33:31,985 WARN [Listener at localhost/43853] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:33:31,993 INFO [Listener at localhost/43853] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:33:31,998 INFO [Listener at localhost/43853] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/Jetty_localhost_34587_datanode____.j7iffq/webapp 2023-07-11 15:33:32,143 INFO [Listener at localhost/43853] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34587 2023-07-11 15:33:32,663 WARN [Listener at localhost/46559] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:33:32,681 WARN [Listener at localhost/46559] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:33:32,686 WARN [Listener at localhost/46559] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:33:32,688 INFO [Listener at localhost/46559] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:33:32,696 INFO [Listener at localhost/46559] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/Jetty_localhost_36163_datanode____.sntcmm/webapp 2023-07-11 15:33:32,816 INFO [Listener at localhost/46559] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36163 2023-07-11 15:33:32,849 WARN [Listener at localhost/40373] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:33:32,885 WARN [Listener at localhost/40373] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:33:32,887 WARN [Listener at localhost/40373] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:33:32,889 INFO [Listener at localhost/40373] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:33:32,899 INFO [Listener at localhost/40373] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/Jetty_localhost_44813_datanode____dudhvl/webapp 2023-07-11 15:33:33,034 INFO [Listener at localhost/40373] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44813 2023-07-11 15:33:33,049 WARN [Listener at localhost/45661] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:33:33,202 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xba6ab7416da976a2: Processing first storage report for DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b from datanode 8e797f1a-c0f5-4c95-bc13-30e2cd63813a 2023-07-11 15:33:33,204 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xba6ab7416da976a2: from storage DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b node DatanodeRegistration(127.0.0.1:34437, datanodeUuid=8e797f1a-c0f5-4c95-bc13-30e2cd63813a, infoPort=45433, 
infoSecurePort=0, ipcPort=46559, storageInfo=lv=-57;cid=testClusterID;nsid=1700349507;c=1689089610610), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-11 15:33:33,204 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c2b3782b2acdbd6: Processing first storage report for DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc from datanode c471a55d-9d41-4630-9b0d-cc9662e3bd64 2023-07-11 15:33:33,204 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c2b3782b2acdbd6: from storage DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc node DatanodeRegistration(127.0.0.1:45239, datanodeUuid=c471a55d-9d41-4630-9b0d-cc9662e3bd64, infoPort=39003, infoSecurePort=0, ipcPort=45661, storageInfo=lv=-57;cid=testClusterID;nsid=1700349507;c=1689089610610), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:33:33,204 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3622697c78d24318: Processing first storage report for DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a from datanode 384fbc8b-0e27-4fb5-8fd6-b2327a4479cf 2023-07-11 15:33:33,204 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3622697c78d24318: from storage DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a node DatanodeRegistration(127.0.0.1:39613, datanodeUuid=384fbc8b-0e27-4fb5-8fd6-b2327a4479cf, infoPort=33927, infoSecurePort=0, ipcPort=40373, storageInfo=lv=-57;cid=testClusterID;nsid=1700349507;c=1689089610610), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:33:33,204 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xba6ab7416da976a2: Processing first storage report for DS-e50efa8d-7087-430e-b096-967a571e4cdf from datanode 8e797f1a-c0f5-4c95-bc13-30e2cd63813a 2023-07-11 15:33:33,205 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xba6ab7416da976a2: from storage DS-e50efa8d-7087-430e-b096-967a571e4cdf node DatanodeRegistration(127.0.0.1:34437, datanodeUuid=8e797f1a-c0f5-4c95-bc13-30e2cd63813a, infoPort=45433, infoSecurePort=0, ipcPort=46559, storageInfo=lv=-57;cid=testClusterID;nsid=1700349507;c=1689089610610), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:33:33,205 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c2b3782b2acdbd6: Processing first storage report for DS-d4870652-8df0-4794-aaa5-f3f793e89232 from datanode c471a55d-9d41-4630-9b0d-cc9662e3bd64 2023-07-11 15:33:33,205 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c2b3782b2acdbd6: from storage DS-d4870652-8df0-4794-aaa5-f3f793e89232 node DatanodeRegistration(127.0.0.1:45239, datanodeUuid=c471a55d-9d41-4630-9b0d-cc9662e3bd64, infoPort=39003, infoSecurePort=0, ipcPort=45661, storageInfo=lv=-57;cid=testClusterID;nsid=1700349507;c=1689089610610), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:33:33,205 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3622697c78d24318: Processing first storage report for DS-565ed0c4-9ba8-403e-83a8-65b8bd96fa5a from datanode 384fbc8b-0e27-4fb5-8fd6-b2327a4479cf 2023-07-11 15:33:33,205 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3622697c78d24318: from storage 
DS-565ed0c4-9ba8-403e-83a8-65b8bd96fa5a node DatanodeRegistration(127.0.0.1:39613, datanodeUuid=384fbc8b-0e27-4fb5-8fd6-b2327a4479cf, infoPort=33927, infoSecurePort=0, ipcPort=40373, storageInfo=lv=-57;cid=testClusterID;nsid=1700349507;c=1689089610610), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:33:33,440 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53 2023-07-11 15:33:33,520 INFO [Listener at localhost/45661] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/zookeeper_0, clientPort=49791, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-11 15:33:33,535 INFO [Listener at localhost/45661] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49791 2023-07-11 15:33:33,546 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:33,548 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:34,207 INFO [Listener at localhost/45661] util.FSUtils(471): Created version file at hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c with version=8 2023-07-11 15:33:34,207 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/hbase-staging 2023-07-11 15:33:34,217 DEBUG [Listener at localhost/45661] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-11 15:33:34,217 DEBUG [Listener at localhost/45661] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-11 15:33:34,218 DEBUG [Listener at localhost/45661] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-11 15:33:34,218 DEBUG [Listener at localhost/45661] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-11 15:33:34,577 INFO [Listener at localhost/45661] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-11 15:33:35,133 INFO [Listener at localhost/45661] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:33:35,176 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:35,177 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:35,177 INFO [Listener at localhost/45661] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:33:35,177 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:35,177 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:33:35,325 INFO [Listener at localhost/45661] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:33:35,402 DEBUG [Listener at localhost/45661] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-11 15:33:35,499 INFO [Listener at localhost/45661] ipc.NettyRpcServer(120): Bind to /172.31.2.10:44179 2023-07-11 15:33:35,510 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:35,512 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:35,532 INFO [Listener at localhost/45661] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44179 connecting to ZooKeeper ensemble=127.0.0.1:49791 2023-07-11 15:33:35,574 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:441790x0, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:33:35,577 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44179-0x10154f6e2600000 connected 2023-07-11 15:33:35,611 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:33:35,612 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:33:35,616 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:33:35,625 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44179 2023-07-11 15:33:35,625 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44179 2023-07-11 15:33:35,626 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44179 2023-07-11 15:33:35,627 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44179 2023-07-11 15:33:35,627 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44179 2023-07-11 15:33:35,659 INFO [Listener at localhost/45661] log.Log(170): Logging initialized @7153ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-11 15:33:35,799 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:33:35,799 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:33:35,800 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:33:35,802 INFO [Listener at localhost/45661] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-11 15:33:35,803 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:33:35,803 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:33:35,808 INFO [Listener at localhost/45661] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 15:33:35,894 INFO [Listener at localhost/45661] http.HttpServer(1146): Jetty bound to port 33091 2023-07-11 15:33:35,896 INFO [Listener at localhost/45661] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:33:35,941 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:35,945 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d059f15{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:33:35,946 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:35,946 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@758a4850{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:33:36,195 INFO [Listener at localhost/45661] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:33:36,209 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:33:36,209 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:33:36,211 INFO [Listener at localhost/45661] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:33:36,220 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,249 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@33f992f8{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/jetty-0_0_0_0-33091-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1877263672527713005/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 15:33:36,264 INFO [Listener at localhost/45661] server.AbstractConnector(333): Started ServerConnector@4eb5bcb4{HTTP/1.1, (http/1.1)}{0.0.0.0:33091} 2023-07-11 15:33:36,264 INFO [Listener at localhost/45661] server.Server(415): Started @7758ms 2023-07-11 15:33:36,268 INFO [Listener at localhost/45661] master.HMaster(444): hbase.rootdir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c, hbase.cluster.distributed=false 2023-07-11 15:33:36,371 INFO [Listener at localhost/45661] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:33:36,372 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,372 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,372 INFO 
[Listener at localhost/45661] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:33:36,372 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,373 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:33:36,381 INFO [Listener at localhost/45661] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:33:36,386 INFO [Listener at localhost/45661] ipc.NettyRpcServer(120): Bind to /172.31.2.10:43957 2023-07-11 15:33:36,390 INFO [Listener at localhost/45661] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:33:36,421 DEBUG [Listener at localhost/45661] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:33:36,423 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:36,426 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:36,428 INFO [Listener at localhost/45661] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43957 connecting to ZooKeeper ensemble=127.0.0.1:49791 2023-07-11 15:33:36,471 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:439570x0, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:33:36,473 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:439570x0, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:33:36,493 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:439570x0, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:33:36,493 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43957-0x10154f6e2600001 connected 2023-07-11 15:33:36,494 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:33:36,497 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43957 2023-07-11 15:33:36,498 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43957 2023-07-11 15:33:36,498 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43957 2023-07-11 15:33:36,499 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, 
numCallQueues=1, port=43957 2023-07-11 15:33:36,499 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43957 2023-07-11 15:33:36,502 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:33:36,502 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:33:36,503 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:33:36,504 INFO [Listener at localhost/45661] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:33:36,504 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:33:36,504 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:33:36,504 INFO [Listener at localhost/45661] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 15:33:36,506 INFO [Listener at localhost/45661] http.HttpServer(1146): Jetty bound to port 37665 2023-07-11 15:33:36,507 INFO [Listener at localhost/45661] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:33:36,510 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,511 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69038eaf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:33:36,511 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,512 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5016aa5d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:33:36,647 INFO [Listener at localhost/45661] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:33:36,649 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:33:36,650 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:33:36,650 INFO [Listener at localhost/45661] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:33:36,652 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,656 INFO [Listener at localhost/45661] 
handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5eb331b4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/jetty-0_0_0_0-37665-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5876922031534085553/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:33:36,657 INFO [Listener at localhost/45661] server.AbstractConnector(333): Started ServerConnector@3a7e07f7{HTTP/1.1, (http/1.1)}{0.0.0.0:37665} 2023-07-11 15:33:36,657 INFO [Listener at localhost/45661] server.Server(415): Started @8151ms 2023-07-11 15:33:36,670 INFO [Listener at localhost/45661] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:33:36,670 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,670 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,671 INFO [Listener at localhost/45661] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:33:36,671 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,671 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:33:36,671 INFO [Listener at localhost/45661] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:33:36,673 INFO [Listener at localhost/45661] ipc.NettyRpcServer(120): Bind to /172.31.2.10:42495 2023-07-11 15:33:36,673 INFO [Listener at localhost/45661] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:33:36,674 DEBUG [Listener at localhost/45661] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:33:36,675 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:36,678 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:36,680 INFO [Listener at localhost/45661] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42495 connecting to ZooKeeper ensemble=127.0.0.1:49791 2023-07-11 15:33:36,683 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:424950x0, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:33:36,684 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42495-0x10154f6e2600002 connected 2023-07-11 15:33:36,685 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:33:36,685 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:33:36,686 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:33:36,689 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42495 2023-07-11 15:33:36,690 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42495 2023-07-11 15:33:36,690 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42495 2023-07-11 15:33:36,690 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42495 2023-07-11 15:33:36,690 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42495 2023-07-11 15:33:36,693 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:33:36,693 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:33:36,693 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:33:36,693 INFO [Listener at localhost/45661] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:33:36,693 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:33:36,694 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:33:36,694 INFO [Listener at localhost/45661] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 15:33:36,694 INFO [Listener at localhost/45661] http.HttpServer(1146): Jetty bound to port 45301 2023-07-11 15:33:36,695 INFO [Listener at localhost/45661] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:33:36,702 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,702 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@328311bb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:33:36,702 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,703 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@256bb865{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:33:36,840 INFO [Listener at localhost/45661] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:33:36,841 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:33:36,841 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:33:36,842 INFO [Listener at localhost/45661] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:33:36,843 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,843 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7fa26dec{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/jetty-0_0_0_0-45301-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3393621837321567496/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:33:36,845 INFO [Listener at localhost/45661] server.AbstractConnector(333): Started ServerConnector@59a4b1f9{HTTP/1.1, (http/1.1)}{0.0.0.0:45301} 2023-07-11 15:33:36,845 INFO [Listener at localhost/45661] server.Server(415): Started @8338ms 2023-07-11 15:33:36,857 INFO [Listener at localhost/45661] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:33:36,857 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,858 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,858 INFO [Listener at localhost/45661] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:33:36,858 INFO 
[Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:36,858 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:33:36,858 INFO [Listener at localhost/45661] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:33:36,859 INFO [Listener at localhost/45661] ipc.NettyRpcServer(120): Bind to /172.31.2.10:36133 2023-07-11 15:33:36,860 INFO [Listener at localhost/45661] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:33:36,861 DEBUG [Listener at localhost/45661] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:33:36,862 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:36,864 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:36,865 INFO [Listener at localhost/45661] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36133 connecting to ZooKeeper ensemble=127.0.0.1:49791 2023-07-11 15:33:36,869 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:361330x0, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:33:36,869 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:361330x0, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:33:36,871 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:361330x0, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:33:36,871 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36133-0x10154f6e2600003 connected 2023-07-11 15:33:36,873 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:33:36,875 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36133 2023-07-11 15:33:36,876 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36133 2023-07-11 15:33:36,876 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36133 2023-07-11 15:33:36,876 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36133 2023-07-11 15:33:36,877 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36133 
2023-07-11 15:33:36,879 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:33:36,879 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:33:36,879 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:33:36,879 INFO [Listener at localhost/45661] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:33:36,880 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:33:36,880 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:33:36,880 INFO [Listener at localhost/45661] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 15:33:36,881 INFO [Listener at localhost/45661] http.HttpServer(1146): Jetty bound to port 41543 2023-07-11 15:33:36,881 INFO [Listener at localhost/45661] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:33:36,889 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,889 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@727c7cf0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:33:36,890 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:36,890 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7e20f29d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:33:37,007 INFO [Listener at localhost/45661] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:33:37,008 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:33:37,008 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:33:37,008 INFO [Listener at localhost/45661] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:33:37,009 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:37,010 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@15b217d2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/jetty-0_0_0_0-41543-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6592611108251734425/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:33:37,012 INFO [Listener at localhost/45661] server.AbstractConnector(333): Started ServerConnector@51804693{HTTP/1.1, (http/1.1)}{0.0.0.0:41543} 2023-07-11 15:33:37,012 INFO [Listener at localhost/45661] server.Server(415): Started @8506ms 2023-07-11 15:33:37,018 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:33:37,024 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@46817a75{HTTP/1.1, (http/1.1)}{0.0.0.0:43865} 2023-07-11 15:33:37,024 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @8518ms 2023-07-11 15:33:37,024 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:33:37,034 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 15:33:37,035 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:33:37,056 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:33:37,056 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:33:37,056 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:37,056 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:33:37,056 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:33:37,058 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 15:33:37,059 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,44179,1689089614389 from backup master directory 2023-07-11 15:33:37,061 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 15:33:37,064 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:33:37,064 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 15:33:37,065 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 15:33:37,065 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:33:37,069 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-11 15:33:37,071 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-11 15:33:37,182 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/hbase.id with ID: 3d7174ae-a67c-4b60-9240-0a3c0cf3051d 2023-07-11 15:33:37,227 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:37,243 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:37,297 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x569c6504 to 127.0.0.1:49791 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:33:37,321 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3a11796d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:33:37,345 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:37,347 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-11 15:33:37,369 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-11 15:33:37,369 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-11 15:33:37,371 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-11 15:33:37,375 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-11 15:33:37,376 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:33:37,416 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store-tmp 2023-07-11 15:33:37,452 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:37,452 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 15:33:37,452 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:33:37,452 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:33:37,452 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 15:33:37,452 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:33:37,452 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 15:33:37,452 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:33:37,454 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/WALs/jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:33:37,475 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C44179%2C1689089614389, suffix=, logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/WALs/jenkins-hbase9.apache.org,44179,1689089614389, archiveDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/oldWALs, maxLogs=10 2023-07-11 15:33:37,537 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK] 2023-07-11 15:33:37,537 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK] 2023-07-11 15:33:37,537 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK] 2023-07-11 15:33:37,545 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-11 15:33:37,618 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/WALs/jenkins-hbase9.apache.org,44179,1689089614389/jenkins-hbase9.apache.org%2C44179%2C1689089614389.1689089617485 2023-07-11 15:33:37,619 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK], DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK], DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK]] 2023-07-11 15:33:37,620 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:37,620 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:37,625 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:33:37,626 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:33:37,714 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:33:37,722 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-11 15:33:37,763 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-11 15:33:37,779 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-11 15:33:37,785 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:33:37,788 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:33:37,808 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:33:37,813 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:37,814 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11539443200, jitterRate=0.07469439506530762}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:37,814 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:33:37,817 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-11 15:33:37,843 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-11 15:33:37,843 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-11 15:33:37,847 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-11 15:33:37,849 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-11 15:33:37,892 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 42 msec 2023-07-11 15:33:37,892 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-11 15:33:37,916 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-11 15:33:37,923 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-11 15:33:37,931 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-11 15:33:37,937 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-11 15:33:37,942 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-11 15:33:37,946 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:37,948 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-11 15:33:37,949 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-11 15:33:37,967 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-11 15:33:37,972 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:33:37,972 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:33:37,972 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:33:37,973 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:37,972 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:33:37,973 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,44179,1689089614389, sessionid=0x10154f6e2600000, setting cluster-up flag (Was=false) 2023-07-11 15:33:37,996 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:38,007 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-11 15:33:38,009 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:33:38,016 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:38,022 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-11 15:33:38,024 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:33:38,028 WARN [master/jenkins-hbase9:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.hbase-snapshot/.tmp 2023-07-11 15:33:38,120 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(951): ClusterId : 3d7174ae-a67c-4b60-9240-0a3c0cf3051d 2023-07-11 15:33:38,120 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(951): ClusterId : 3d7174ae-a67c-4b60-9240-0a3c0cf3051d 2023-07-11 15:33:38,120 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(951): ClusterId : 3d7174ae-a67c-4b60-9240-0a3c0cf3051d 2023-07-11 15:33:38,130 DEBUG [RS:1;jenkins-hbase9:42495] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:33:38,130 DEBUG [RS:2;jenkins-hbase9:36133] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:33:38,130 DEBUG [RS:0;jenkins-hbase9:43957] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:33:38,136 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-11 15:33:38,139 DEBUG [RS:2;jenkins-hbase9:36133] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:33:38,139 DEBUG [RS:2;jenkins-hbase9:36133] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:33:38,139 DEBUG [RS:0;jenkins-hbase9:43957] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:33:38,139 DEBUG [RS:1;jenkins-hbase9:42495] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:33:38,139 DEBUG [RS:1;jenkins-hbase9:42495] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:33:38,139 DEBUG [RS:0;jenkins-hbase9:43957] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:33:38,143 DEBUG [RS:1;jenkins-hbase9:42495] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:33:38,143 DEBUG [RS:2;jenkins-hbase9:36133] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:33:38,143 DEBUG [RS:0;jenkins-hbase9:43957] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:33:38,146 DEBUG 
[RS:1;jenkins-hbase9:42495] zookeeper.ReadOnlyZKClient(139): Connect 0x14159c69 to 127.0.0.1:49791 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:33:38,146 DEBUG [RS:2;jenkins-hbase9:36133] zookeeper.ReadOnlyZKClient(139): Connect 0x097c2907 to 127.0.0.1:49791 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:33:38,147 DEBUG [RS:0;jenkins-hbase9:43957] zookeeper.ReadOnlyZKClient(139): Connect 0x5c3745f1 to 127.0.0.1:49791 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:33:38,159 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-11 15:33:38,164 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-11 15:33:38,164 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-11 15:33:38,160 DEBUG [RS:2;jenkins-hbase9:36133] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c7ea64c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:33:38,162 DEBUG [RS:1;jenkins-hbase9:42495] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ac1fcc6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:33:38,166 DEBUG [RS:0;jenkins-hbase9:43957] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f8679b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:33:38,166 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 15:33:38,166 DEBUG [RS:1;jenkins-hbase9:42495] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@596dd0c9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:33:38,166 DEBUG [RS:2;jenkins-hbase9:36133] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d48c24e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:33:38,166 DEBUG [RS:0;jenkins-hbase9:43957] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21cd2b1d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:33:38,203 DEBUG [RS:1;jenkins-hbase9:42495] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:42495 2023-07-11 15:33:38,203 DEBUG [RS:2;jenkins-hbase9:36133] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:36133 2023-07-11 15:33:38,207 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:43957 2023-07-11 15:33:38,211 INFO [RS:1;jenkins-hbase9:42495] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:33:38,211 INFO [RS:0;jenkins-hbase9:43957] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:33:38,212 INFO [RS:1;jenkins-hbase9:42495] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:33:38,212 INFO [RS:2;jenkins-hbase9:36133] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:33:38,212 DEBUG [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 15:33:38,212 INFO [RS:0;jenkins-hbase9:43957] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:33:38,212 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 15:33:38,212 INFO [RS:2;jenkins-hbase9:36133] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:33:38,213 DEBUG [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-11 15:33:38,217 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,44179,1689089614389 with isa=jenkins-hbase9.apache.org/172.31.2.10:42495, startcode=1689089616669 2023-07-11 15:33:38,217 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,44179,1689089614389 with isa=jenkins-hbase9.apache.org/172.31.2.10:43957, startcode=1689089616370 2023-07-11 15:33:38,221 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,44179,1689089614389 with isa=jenkins-hbase9.apache.org/172.31.2.10:36133, startcode=1689089616857 2023-07-11 15:33:38,244 DEBUG [RS:1;jenkins-hbase9:42495] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:33:38,244 DEBUG [RS:2;jenkins-hbase9:36133] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:33:38,244 DEBUG [RS:0;jenkins-hbase9:43957] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:33:38,302 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-11 15:33:38,319 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:44505, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:33:38,319 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:49345, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:33:38,319 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:57531, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:33:38,334 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:38,372 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:38,373 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:38,394 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 15:33:38,402 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 15:33:38,402 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 15:33:38,403 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-11 15:33:38,405 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:33:38,405 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:33:38,405 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:33:38,407 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(2830): Master is not running yet 2023-07-11 15:33:38,407 DEBUG [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(2830): Master is not running yet 2023-07-11 15:33:38,408 WARN [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-11 15:33:38,406 DEBUG [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(2830): Master is not running yet 2023-07-11 15:33:38,407 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:33:38,408 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-11 15:33:38,408 WARN [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-11 15:33:38,408 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,409 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:33:38,409 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,408 WARN [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-11 15:33:38,421 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689089648421 2023-07-11 15:33:38,425 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-11 15:33:38,431 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-11 15:33:38,437 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 15:33:38,437 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-11 15:33:38,440 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-11 15:33:38,440 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:38,440 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-11 15:33:38,441 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-11 15:33:38,441 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-11 15:33:38,443 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-11 15:33:38,445 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-11 15:33:38,448 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-11 15:33:38,448 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-11 15:33:38,452 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-11 15:33:38,453 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-11 15:33:38,457 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089618455,5,FailOnTimeoutGroup] 2023-07-11 15:33:38,458 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089618458,5,FailOnTimeoutGroup] 2023-07-11 15:33:38,458 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,458 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-11 15:33:38,460 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,461 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,510 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,44179,1689089614389 with isa=jenkins-hbase9.apache.org/172.31.2.10:36133, startcode=1689089616857 2023-07-11 15:33:38,511 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,44179,1689089614389 with isa=jenkins-hbase9.apache.org/172.31.2.10:42495, startcode=1689089616669 2023-07-11 15:33:38,511 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,44179,1689089614389 with isa=jenkins-hbase9.apache.org/172.31.2.10:43957, startcode=1689089616370 2023-07-11 15:33:38,526 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44179] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:38,528 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 15:33:38,531 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:38,532 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:38,533 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c 2023-07-11 15:33:38,533 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44179] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:38,534 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44179] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:38,534 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-11 15:33:38,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 15:33:38,536 DEBUG [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c 2023-07-11 15:33:38,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-11 15:33:38,536 DEBUG [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c 2023-07-11 15:33:38,536 DEBUG [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43853 2023-07-11 15:33:38,537 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c 2023-07-11 15:33:38,536 DEBUG [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43853 2023-07-11 15:33:38,537 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43853 2023-07-11 15:33:38,537 DEBUG [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33091 2023-07-11 15:33:38,537 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33091 2023-07-11 15:33:38,537 DEBUG [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33091 2023-07-11 15:33:38,557 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:33:38,559 DEBUG [RS:1;jenkins-hbase9:42495] zookeeper.ZKUtil(162): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:38,559 WARN [RS:1;jenkins-hbase9:42495] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 15:33:38,559 DEBUG [RS:0;jenkins-hbase9:43957] zookeeper.ZKUtil(162): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:38,559 DEBUG [RS:2;jenkins-hbase9:36133] zookeeper.ZKUtil(162): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:38,559 INFO [RS:1;jenkins-hbase9:42495] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:33:38,560 DEBUG [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:38,559 WARN [RS:0;jenkins-hbase9:43957] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-11 15:33:38,559 WARN [RS:2;jenkins-hbase9:36133] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 15:33:38,567 INFO [RS:2;jenkins-hbase9:36133] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:33:38,567 DEBUG [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:38,567 INFO [RS:0;jenkins-hbase9:43957] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:33:38,568 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:38,570 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,42495,1689089616669] 2023-07-11 15:33:38,570 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,36133,1689089616857] 2023-07-11 15:33:38,570 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,43957,1689089616370] 2023-07-11 15:33:38,605 DEBUG [RS:2;jenkins-hbase9:36133] zookeeper.ZKUtil(162): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:38,605 DEBUG [RS:1;jenkins-hbase9:42495] zookeeper.ZKUtil(162): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:38,605 DEBUG [RS:0;jenkins-hbase9:43957] zookeeper.ZKUtil(162): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:38,607 DEBUG [RS:1;jenkins-hbase9:42495] zookeeper.ZKUtil(162): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:38,607 DEBUG [RS:0;jenkins-hbase9:43957] zookeeper.ZKUtil(162): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:38,608 DEBUG [RS:2;jenkins-hbase9:36133] zookeeper.ZKUtil(162): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:38,608 DEBUG [RS:0;jenkins-hbase9:43957] zookeeper.ZKUtil(162): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:38,608 DEBUG [RS:1;jenkins-hbase9:42495] zookeeper.ZKUtil(162): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:38,609 DEBUG [RS:2;jenkins-hbase9:36133] 
zookeeper.ZKUtil(162): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:38,610 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:38,615 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 15:33:38,620 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info 2023-07-11 15:33:38,621 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 15:33:38,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:38,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 15:33:38,625 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:33:38,630 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 15:33:38,630 DEBUG [RS:2;jenkins-hbase9:36133] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:33:38,630 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.Replication(139): Replication stats-in-log period=300 seconds 
2023-07-11 15:33:38,631 DEBUG [RS:1;jenkins-hbase9:42495] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:33:38,648 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:38,649 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 15:33:38,652 INFO [RS:0;jenkins-hbase9:43957] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:33:38,652 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table 2023-07-11 15:33:38,653 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 15:33:38,654 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:38,656 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740 2023-07-11 15:33:38,659 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740 2023-07-11 15:33:38,660 INFO [RS:2;jenkins-hbase9:36133] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:33:38,659 INFO [RS:1;jenkins-hbase9:42495] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:33:38,666 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-11 15:33:38,674 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 15:33:38,707 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:38,708 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10624879840, jitterRate=-0.010480955243110657}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 15:33:38,709 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 15:33:38,710 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 15:33:38,710 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 15:33:38,711 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 15:33:38,711 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 15:33:38,711 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 15:33:38,739 INFO [RS:1;jenkins-hbase9:42495] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:33:38,748 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 15:33:38,748 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 15:33:38,745 INFO [RS:0;jenkins-hbase9:43957] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:33:38,748 INFO [RS:2;jenkins-hbase9:36133] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:33:38,755 INFO [RS:0;jenkins-hbase9:43957] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:33:38,755 INFO [RS:0;jenkins-hbase9:43957] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,757 INFO [RS:1;jenkins-hbase9:42495] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:33:38,762 INFO [RS:2;jenkins-hbase9:36133] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:33:38,762 INFO [RS:1;jenkins-hbase9:42495] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,766 INFO [RS:2;jenkins-hbase9:36133] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:33:38,766 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:33:38,765 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 15:33:38,766 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:33:38,767 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:33:38,766 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-11 15:33:38,778 INFO [RS:2;jenkins-hbase9:36133] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,778 INFO [RS:0;jenkins-hbase9:43957] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,779 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,779 INFO [RS:1;jenkins-hbase9:42495] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,779 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,779 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,779 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:33:38,780 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service 
name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:33:38,780 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,781 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,780 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-11 15:33:38,781 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,781 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,779 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,781 DEBUG [RS:2;jenkins-hbase9:36133] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,781 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,781 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,781 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,782 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,782 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:33:38,782 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,782 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,782 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,782 DEBUG [RS:0;jenkins-hbase9:43957] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, 
corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,781 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,782 DEBUG [RS:1;jenkins-hbase9:42495] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:38,806 INFO [RS:1;jenkins-hbase9:42495] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,806 INFO [RS:1;jenkins-hbase9:42495] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,806 INFO [RS:1;jenkins-hbase9:42495] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,810 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-11 15:33:38,813 INFO [RS:0;jenkins-hbase9:43957] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,814 INFO [RS:0;jenkins-hbase9:43957] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,814 INFO [RS:0;jenkins-hbase9:43957] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,815 INFO [RS:2;jenkins-hbase9:36133] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,815 INFO [RS:2;jenkins-hbase9:36133] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,815 INFO [RS:2;jenkins-hbase9:36133] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,820 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-11 15:33:38,835 INFO [RS:1;jenkins-hbase9:42495] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:33:38,836 INFO [RS:0;jenkins-hbase9:43957] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:33:38,844 INFO [RS:2;jenkins-hbase9:36133] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:33:38,845 INFO [RS:1;jenkins-hbase9:42495] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,42495,1689089616669-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,846 INFO [RS:2;jenkins-hbase9:36133] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,36133,1689089616857-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:38,849 INFO [RS:0;jenkins-hbase9:43957] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43957,1689089616370-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:33:38,880 INFO [RS:2;jenkins-hbase9:36133] regionserver.Replication(203): jenkins-hbase9.apache.org,36133,1689089616857 started 2023-07-11 15:33:38,881 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,36133,1689089616857, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:36133, sessionid=0x10154f6e2600003 2023-07-11 15:33:38,881 INFO [RS:0;jenkins-hbase9:43957] regionserver.Replication(203): jenkins-hbase9.apache.org,43957,1689089616370 started 2023-07-11 15:33:38,881 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,43957,1689089616370, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:43957, sessionid=0x10154f6e2600001 2023-07-11 15:33:38,881 DEBUG [RS:2;jenkins-hbase9:36133] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:33:38,881 DEBUG [RS:0;jenkins-hbase9:43957] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:33:38,881 DEBUG [RS:2;jenkins-hbase9:36133] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:38,882 DEBUG [RS:2;jenkins-hbase9:36133] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,36133,1689089616857' 2023-07-11 15:33:38,882 DEBUG [RS:2;jenkins-hbase9:36133] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:33:38,881 DEBUG [RS:0;jenkins-hbase9:43957] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:38,882 INFO [RS:1;jenkins-hbase9:42495] regionserver.Replication(203): jenkins-hbase9.apache.org,42495,1689089616669 started 2023-07-11 15:33:38,884 DEBUG [RS:0;jenkins-hbase9:43957] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,43957,1689089616370' 2023-07-11 15:33:38,884 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,42495,1689089616669, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:42495, sessionid=0x10154f6e2600002 2023-07-11 15:33:38,885 DEBUG [RS:0;jenkins-hbase9:43957] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:33:38,885 DEBUG [RS:1;jenkins-hbase9:42495] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:33:38,885 DEBUG [RS:1;jenkins-hbase9:42495] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:38,885 DEBUG [RS:1;jenkins-hbase9:42495] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,42495,1689089616669' 2023-07-11 15:33:38,885 DEBUG [RS:1;jenkins-hbase9:42495] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:33:38,891 DEBUG [RS:2;jenkins-hbase9:36133] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:33:38,891 DEBUG [RS:0;jenkins-hbase9:43957] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:33:38,892 DEBUG 
[RS:1;jenkins-hbase9:42495] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:33:38,892 DEBUG [RS:2;jenkins-hbase9:36133] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:33:38,892 DEBUG [RS:0;jenkins-hbase9:43957] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:33:38,892 DEBUG [RS:2;jenkins-hbase9:36133] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:33:38,892 DEBUG [RS:0;jenkins-hbase9:43957] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:33:38,892 DEBUG [RS:2;jenkins-hbase9:36133] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:38,892 DEBUG [RS:0;jenkins-hbase9:43957] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:38,900 DEBUG [RS:0;jenkins-hbase9:43957] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,43957,1689089616370' 2023-07-11 15:33:38,900 DEBUG [RS:0;jenkins-hbase9:43957] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:33:38,900 DEBUG [RS:2;jenkins-hbase9:36133] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,36133,1689089616857' 2023-07-11 15:33:38,900 DEBUG [RS:1;jenkins-hbase9:42495] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:33:38,900 DEBUG [RS:2;jenkins-hbase9:36133] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:33:38,901 DEBUG [RS:1;jenkins-hbase9:42495] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:33:38,901 DEBUG [RS:1;jenkins-hbase9:42495] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:38,901 DEBUG [RS:0;jenkins-hbase9:43957] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:33:38,901 DEBUG [RS:1;jenkins-hbase9:42495] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,42495,1689089616669' 2023-07-11 15:33:38,901 DEBUG [RS:1;jenkins-hbase9:42495] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:33:38,901 DEBUG [RS:2;jenkins-hbase9:36133] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:33:38,902 DEBUG [RS:0;jenkins-hbase9:43957] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:33:38,902 INFO [RS:0;jenkins-hbase9:43957] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 15:33:38,902 INFO [RS:0;jenkins-hbase9:43957] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-11 15:33:38,911 DEBUG [RS:1;jenkins-hbase9:42495] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:33:38,911 DEBUG [RS:2;jenkins-hbase9:36133] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:33:38,912 INFO [RS:2;jenkins-hbase9:36133] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 15:33:38,912 INFO [RS:2;jenkins-hbase9:36133] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-11 15:33:38,913 DEBUG [RS:1;jenkins-hbase9:42495] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:33:38,914 INFO [RS:1;jenkins-hbase9:42495] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 15:33:38,914 INFO [RS:1;jenkins-hbase9:42495] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-11 15:33:38,972 DEBUG [jenkins-hbase9:44179] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-11 15:33:38,991 DEBUG [jenkins-hbase9:44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:38,993 DEBUG [jenkins-hbase9:44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:38,993 DEBUG [jenkins-hbase9:44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:38,993 DEBUG [jenkins-hbase9:44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:38,993 DEBUG [jenkins-hbase9:44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:38,997 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,42495,1689089616669, state=OPENING 2023-07-11 15:33:39,006 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-11 15:33:39,007 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:39,008 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:33:39,013 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:39,068 WARN [ReadOnlyZKClient-127.0.0.1:49791@0x569c6504] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-11 15:33:39,074 INFO [RS:2;jenkins-hbase9:36133] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C36133%2C1689089616857, suffix=, logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,36133,1689089616857, archiveDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs, maxLogs=32 2023-07-11 15:33:39,074 INFO [RS:0;jenkins-hbase9:43957] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase9.apache.org%2C43957%2C1689089616370, suffix=, logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,43957,1689089616370, archiveDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs, maxLogs=32 2023-07-11 15:33:39,074 INFO [RS:1;jenkins-hbase9:42495] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C42495%2C1689089616669, suffix=, logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,42495,1689089616669, archiveDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs, maxLogs=32 2023-07-11 15:33:39,109 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,44179,1689089614389] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:33:39,127 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:50064, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:33:39,133 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42495] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:50064 deadline: 1689089679127, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:39,147 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK] 2023-07-11 15:33:39,147 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK] 2023-07-11 15:33:39,152 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK] 2023-07-11 15:33:39,153 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK] 2023-07-11 15:33:39,165 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK] 2023-07-11 15:33:39,167 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK] 2023-07-11 15:33:39,168 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK] 2023-07-11 15:33:39,168 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK] 2023-07-11 15:33:39,169 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK] 2023-07-11 15:33:39,199 INFO [RS:1;jenkins-hbase9:42495] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,42495,1689089616669/jenkins-hbase9.apache.org%2C42495%2C1689089616669.1689089619088 2023-07-11 15:33:39,199 INFO [RS:0;jenkins-hbase9:43957] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,43957,1689089616370/jenkins-hbase9.apache.org%2C43957%2C1689089616370.1689089619089 2023-07-11 15:33:39,200 DEBUG [RS:1;jenkins-hbase9:42495] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK], DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK], DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK]] 2023-07-11 15:33:39,200 INFO [RS:2;jenkins-hbase9:36133] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,36133,1689089616857/jenkins-hbase9.apache.org%2C36133%2C1689089616857.1689089619089 2023-07-11 15:33:39,201 DEBUG [RS:0;jenkins-hbase9:43957] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK], DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK], DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK]] 2023-07-11 15:33:39,201 DEBUG [RS:2;jenkins-hbase9:36133] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK], DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK], DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK]] 2023-07-11 15:33:39,276 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:39,280 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:33:39,288 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:50072, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:33:39,305 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 15:33:39,305 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:33:39,309 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C42495%2C1689089616669.meta, suffix=.meta, logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,42495,1689089616669, archiveDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs, maxLogs=32 2023-07-11 15:33:39,343 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK] 2023-07-11 15:33:39,344 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK] 2023-07-11 15:33:39,344 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK] 2023-07-11 15:33:39,368 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,42495,1689089616669/jenkins-hbase9.apache.org%2C42495%2C1689089616669.meta.1689089619310.meta 2023-07-11 15:33:39,373 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK], DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK], DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK]] 2023-07-11 15:33:39,374 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:39,375 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 15:33:39,378 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 15:33:39,380 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-11 15:33:39,387 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 15:33:39,387 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:39,387 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 15:33:39,387 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 15:33:39,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 15:33:39,398 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info 2023-07-11 15:33:39,398 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info 2023-07-11 15:33:39,399 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 15:33:39,400 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:39,400 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 15:33:39,402 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:33:39,402 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:33:39,403 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 15:33:39,404 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:39,404 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 15:33:39,406 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table 2023-07-11 15:33:39,406 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table 2023-07-11 15:33:39,407 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 15:33:39,408 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:39,410 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740 2023-07-11 15:33:39,414 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740 2023-07-11 15:33:39,422 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-11 15:33:39,427 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 15:33:39,428 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9536949920, jitterRate=-0.11180232465267181}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 15:33:39,429 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 15:33:39,452 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689089619271 2023-07-11 15:33:39,491 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 15:33:39,495 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 15:33:39,496 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,42495,1689089616669, state=OPEN 2023-07-11 15:33:39,501 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 15:33:39,501 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:33:39,507 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-11 15:33:39,507 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,42495,1689089616669 in 488 msec 2023-07-11 15:33:39,516 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-11 15:33:39,516 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 730 msec 2023-07-11 15:33:39,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.3460 sec 2023-07-11 15:33:39,523 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689089619523, completionTime=-1 2023-07-11 15:33:39,523 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-11 15:33:39,524 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-11 15:33:39,629 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-11 15:33:39,629 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689089679629 2023-07-11 15:33:39,629 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689089739629 2023-07-11 15:33:39,629 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 105 msec 2023-07-11 15:33:39,656 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,44179,1689089614389-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:39,657 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,44179,1689089614389-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:39,657 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,44179,1689089614389-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:39,660 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:44179, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:39,660 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:39,672 DEBUG [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-11 15:33:39,686 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-11 15:33:39,690 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:39,705 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,44179,1689089614389] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:39,712 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,44179,1689089614389] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-11 15:33:39,712 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-11 15:33:39,720 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:33:39,728 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:33:39,734 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:33:39,742 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:39,742 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:33:39,751 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:39,757 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe empty. 2023-07-11 15:33:39,761 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc empty. 
2023-07-11 15:33:39,761 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:39,761 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-11 15:33:39,761 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:39,761 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-11 15:33:39,864 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:39,865 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:39,868 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 48c2bf7782ee61fbc67dfe7aa5f38abc, NAME => 'hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:39,868 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => db11ce5f2f749a24653755c2ee31ecfe, NAME => 'hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:39,934 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:39,934 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing db11ce5f2f749a24653755c2ee31ecfe, disabling compactions & flushes 2023-07-11 15:33:39,934 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:39,934 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:39,935 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. after waiting 0 ms 2023-07-11 15:33:39,935 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:39,935 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:39,935 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for db11ce5f2f749a24653755c2ee31ecfe: 2023-07-11 15:33:39,950 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:39,951 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 48c2bf7782ee61fbc67dfe7aa5f38abc, disabling compactions & flushes 2023-07-11 15:33:39,951 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:39,951 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:39,951 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. after waiting 0 ms 2023-07-11 15:33:39,951 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:39,951 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 
2023-07-11 15:33:39,951 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 48c2bf7782ee61fbc67dfe7aa5f38abc: 2023-07-11 15:33:39,954 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:33:39,961 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:33:39,973 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089619956"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089619956"}]},"ts":"1689089619956"} 2023-07-11 15:33:39,973 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089619962"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089619962"}]},"ts":"1689089619962"} 2023-07-11 15:33:40,008 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:33:40,010 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:33:40,012 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:33:40,014 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:33:40,018 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089620015"}]},"ts":"1689089620015"} 2023-07-11 15:33:40,018 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089620010"}]},"ts":"1689089620010"} 2023-07-11 15:33:40,029 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-11 15:33:40,034 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:40,034 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:40,034 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:40,034 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:40,034 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:40,037 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=48c2bf7782ee61fbc67dfe7aa5f38abc, ASSIGN}] 2023-07-11 15:33:40,039 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, 
state=ENABLING in hbase:meta 2023-07-11 15:33:40,041 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=48c2bf7782ee61fbc67dfe7aa5f38abc, ASSIGN 2023-07-11 15:33:40,043 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=48c2bf7782ee61fbc67dfe7aa5f38abc, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42495,1689089616669; forceNewPlan=false, retain=false 2023-07-11 15:33:40,046 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:40,046 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:40,046 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:40,046 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:40,046 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:40,047 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, ASSIGN}] 2023-07-11 15:33:40,050 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, ASSIGN 2023-07-11 15:33:40,054 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42495,1689089616669; forceNewPlan=false, retain=false 2023-07-11 15:33:40,055 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-11 15:33:40,057 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=48c2bf7782ee61fbc67dfe7aa5f38abc, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:40,058 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089620057"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089620057"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089620057"}]},"ts":"1689089620057"} 2023-07-11 15:33:40,058 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:40,058 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089620058"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089620058"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089620058"}]},"ts":"1689089620058"} 2023-07-11 15:33:40,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 48c2bf7782ee61fbc67dfe7aa5f38abc, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:40,067 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:40,234 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:40,234 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 48c2bf7782ee61fbc67dfe7aa5f38abc, NAME => 'hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:40,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 15:33:40,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. service=MultiRowMutationService 2023-07-11 15:33:40,236 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-11 15:33:40,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:40,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:40,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:40,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:40,246 INFO [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:40,253 DEBUG [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m 2023-07-11 15:33:40,253 DEBUG [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m 2023-07-11 15:33:40,254 INFO [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 48c2bf7782ee61fbc67dfe7aa5f38abc columnFamilyName m 2023-07-11 15:33:40,255 INFO [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] regionserver.HStore(310): Store=48c2bf7782ee61fbc67dfe7aa5f38abc/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:40,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:40,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:40,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(1055): writing seq id for 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:40,270 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:40,271 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 48c2bf7782ee61fbc67dfe7aa5f38abc; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7812f545, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:40,271 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 48c2bf7782ee61fbc67dfe7aa5f38abc: 2023-07-11 15:33:40,277 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc., pid=8, masterSystemTime=1689089620220 2023-07-11 15:33:40,284 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=48c2bf7782ee61fbc67dfe7aa5f38abc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:40,285 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:40,285 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089620283"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089620283"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089620283"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089620283"}]},"ts":"1689089620283"} 2023-07-11 15:33:40,285 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:40,285 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:40,285 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => db11ce5f2f749a24653755c2ee31ecfe, NAME => 'hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:40,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:40,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:40,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:40,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:40,302 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:40,305 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-11 15:33:40,306 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 48c2bf7782ee61fbc67dfe7aa5f38abc, server=jenkins-hbase9.apache.org,42495,1689089616669 in 234 msec 2023-07-11 15:33:40,306 DEBUG [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info 2023-07-11 15:33:40,306 DEBUG [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info 2023-07-11 15:33:40,307 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region db11ce5f2f749a24653755c2ee31ecfe columnFamilyName info 2023-07-11 15:33:40,308 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] regionserver.HStore(310): Store=db11ce5f2f749a24653755c2ee31ecfe/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 
15:33:40,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:40,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:40,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-11 15:33:40,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=48c2bf7782ee61fbc67dfe7aa5f38abc, ASSIGN in 269 msec 2023-07-11 15:33:40,320 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:33:40,320 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089620320"}]},"ts":"1689089620320"} 2023-07-11 15:33:40,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:40,326 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:40,326 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-11 15:33:40,327 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened db11ce5f2f749a24653755c2ee31ecfe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11045541600, jitterRate=0.028696224093437195}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:40,327 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for db11ce5f2f749a24653755c2ee31ecfe: 2023-07-11 15:33:40,328 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe., pid=9, masterSystemTime=1689089620220 2023-07-11 15:33:40,331 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:40,331 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:40,332 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:40,332 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089620332"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089620332"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089620332"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089620332"}]},"ts":"1689089620332"} 2023-07-11 15:33:40,336 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:33:40,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 630 msec 2023-07-11 15:33:40,343 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-11 15:33:40,345 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,42495,1689089616669 in 269 msec 2023-07-11 15:33:40,350 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-11 15:33:40,350 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, ASSIGN in 298 msec 2023-07-11 15:33:40,353 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:33:40,353 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089620353"}]},"ts":"1689089620353"} 2023-07-11 15:33:40,355 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-11 15:33:40,359 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:33:40,362 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 668 msec 2023-07-11 15:33:40,429 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-11 15:33:40,431 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:33:40,431 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:40,452 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-11 15:33:40,452 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-11 15:33:40,473 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-11 15:33:40,503 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:33:40,514 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 55 msec 2023-07-11 15:33:40,518 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-11 15:33:40,531 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:33:40,541 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 20 msec 2023-07-11 15:33:40,548 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:40,548 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:40,551 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 15:33:40,557 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-11 15:33:40,558 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-11 15:33:40,564 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-11 15:33:40,564 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.499sec 2023-07-11 15:33:40,571 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
quotas.MasterQuotaManager(97): Quota support disabled 2023-07-11 15:33:40,573 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-11 15:33:40,573 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-11 15:33:40,575 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,44179,1689089614389-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-11 15:33:40,576 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,44179,1689089614389-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-11 15:33:40,591 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-11 15:33:40,632 DEBUG [Listener at localhost/45661] zookeeper.ReadOnlyZKClient(139): Connect 0x7d54ff60 to 127.0.0.1:49791 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:33:40,651 DEBUG [Listener at localhost/45661] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58093b7b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:33:40,674 DEBUG [hconnection-0x2ad2b5e2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:33:40,692 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:50078, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:33:40,704 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:33:40,706 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:40,718 DEBUG [Listener at localhost/45661] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-11 15:33:40,722 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:55202, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-11 15:33:40,740 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-11 15:33:40,740 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:33:40,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-11 15:33:40,747 DEBUG [Listener at localhost/45661] zookeeper.ReadOnlyZKClient(139): Connect 0x4d60c024 to 127.0.0.1:49791 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:33:40,777 DEBUG [Listener at localhost/45661] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5caafa9d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:33:40,778 INFO [Listener at localhost/45661] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:49791 2023-07-11 15:33:40,788 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:33:40,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10154f6e260000a connected 2023-07-11 15:33:40,841 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=419, OpenFileDescriptor=684, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=178, AvailableMemoryMB=7333 2023-07-11 15:33:40,845 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-11 15:33:40,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:40,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:40,940 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-11 15:33:40,952 INFO [Listener at localhost/45661] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:33:40,953 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:40,953 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:40,953 INFO [Listener at localhost/45661] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:33:40,953 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:33:40,954 INFO [Listener at localhost/45661] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:33:40,954 INFO [Listener at localhost/45661] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:33:40,959 INFO [Listener at localhost/45661] ipc.NettyRpcServer(120): Bind to /172.31.2.10:45349 2023-07-11 15:33:40,959 INFO [Listener at localhost/45661] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:33:40,965 DEBUG [Listener at localhost/45661] 
mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:33:40,967 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:40,973 INFO [Listener at localhost/45661] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:33:40,976 INFO [Listener at localhost/45661] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45349 connecting to ZooKeeper ensemble=127.0.0.1:49791 2023-07-11 15:33:40,984 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:453490x0, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:33:40,986 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(162): regionserver:453490x0, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 15:33:40,987 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(162): regionserver:453490x0, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-11 15:33:40,988 DEBUG [Listener at localhost/45661] zookeeper.ZKUtil(164): regionserver:453490x0, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:33:40,990 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45349-0x10154f6e260000b connected 2023-07-11 15:33:40,994 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45349 2023-07-11 15:33:40,997 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45349 2023-07-11 15:33:40,998 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45349 2023-07-11 15:33:41,005 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45349 2023-07-11 15:33:41,005 DEBUG [Listener at localhost/45661] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45349 2023-07-11 15:33:41,008 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:33:41,008 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:33:41,008 INFO [Listener at localhost/45661] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:33:41,009 INFO [Listener at localhost/45661] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:33:41,009 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 
15:33:41,010 INFO [Listener at localhost/45661] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:33:41,010 INFO [Listener at localhost/45661] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 15:33:41,010 INFO [Listener at localhost/45661] http.HttpServer(1146): Jetty bound to port 35705 2023-07-11 15:33:41,011 INFO [Listener at localhost/45661] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:33:41,014 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:41,015 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b8a7a95{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:33:41,015 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:41,016 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@30e55351{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:33:41,182 INFO [Listener at localhost/45661] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:33:41,184 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:33:41,184 INFO [Listener at localhost/45661] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:33:41,184 INFO [Listener at localhost/45661] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:33:41,186 INFO [Listener at localhost/45661] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:33:41,187 INFO [Listener at localhost/45661] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@dfc4cbd{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/java.io.tmpdir/jetty-0_0_0_0-35705-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8863214025598320982/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:33:41,190 INFO [Listener at localhost/45661] server.AbstractConnector(333): Started ServerConnector@4f8a6a12{HTTP/1.1, (http/1.1)}{0.0.0.0:35705} 2023-07-11 15:33:41,190 INFO [Listener at localhost/45661] server.Server(415): Started @12684ms 2023-07-11 15:33:41,204 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(951): ClusterId : 3d7174ae-a67c-4b60-9240-0a3c0cf3051d 2023-07-11 15:33:41,212 DEBUG [RS:3;jenkins-hbase9:45349] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:33:41,223 DEBUG [RS:3;jenkins-hbase9:45349] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 
2023-07-11 15:33:41,223 DEBUG [RS:3;jenkins-hbase9:45349] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:33:41,225 DEBUG [RS:3;jenkins-hbase9:45349] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:33:41,237 DEBUG [RS:3;jenkins-hbase9:45349] zookeeper.ReadOnlyZKClient(139): Connect 0x0bc50a61 to 127.0.0.1:49791 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:33:41,268 DEBUG [RS:3;jenkins-hbase9:45349] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@502e569, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:33:41,268 DEBUG [RS:3;jenkins-hbase9:45349] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13d6111, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:33:41,278 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase9:45349 2023-07-11 15:33:41,278 INFO [RS:3;jenkins-hbase9:45349] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:33:41,278 INFO [RS:3;jenkins-hbase9:45349] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:33:41,278 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 15:33:41,279 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,44179,1689089614389 with isa=jenkins-hbase9.apache.org/172.31.2.10:45349, startcode=1689089620952 2023-07-11 15:33:41,279 DEBUG [RS:3;jenkins-hbase9:45349] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:33:41,284 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:57681, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:33:41,285 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44179] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,285 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 15:33:41,286 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c 2023-07-11 15:33:41,286 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43853 2023-07-11 15:33:41,286 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33091 2023-07-11 15:33:41,291 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:33:41,292 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:33:41,291 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:33:41,291 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:33:41,293 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,45349,1689089620952] 2023-07-11 15:33:41,294 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:41,294 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:41,294 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:41,294 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:41,294 DEBUG [RS:3;jenkins-hbase9:45349] zookeeper.ZKUtil(162): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,294 WARN [RS:3;jenkins-hbase9:45349] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-11 15:33:41,294 INFO [RS:3;jenkins-hbase9:45349] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:33:41,295 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,295 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:41,295 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:41,295 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:41,295 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 15:33:41,296 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,296 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,296 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:41,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:41,309 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,44179,1689089614389] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-11 15:33:41,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:41,312 DEBUG [RS:3;jenkins-hbase9:45349] zookeeper.ZKUtil(162): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:41,313 DEBUG [RS:3;jenkins-hbase9:45349] zookeeper.ZKUtil(162): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:41,313 DEBUG [RS:3;jenkins-hbase9:45349] zookeeper.ZKUtil(162): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,314 DEBUG [RS:3;jenkins-hbase9:45349] zookeeper.ZKUtil(162): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:41,315 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:33:41,315 INFO [RS:3;jenkins-hbase9:45349] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:33:41,322 INFO [RS:3;jenkins-hbase9:45349] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:33:41,337 INFO [RS:3;jenkins-hbase9:45349] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:33:41,337 INFO [RS:3;jenkins-hbase9:45349] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:41,341 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:33:41,343 INFO [RS:3;jenkins-hbase9:45349] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,344 DEBUG [RS:3;jenkins-hbase9:45349] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:33:41,349 INFO [RS:3;jenkins-hbase9:45349] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:41,350 INFO [RS:3;jenkins-hbase9:45349] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:41,350 INFO [RS:3;jenkins-hbase9:45349] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:33:41,371 INFO [RS:3;jenkins-hbase9:45349] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:33:41,371 INFO [RS:3;jenkins-hbase9:45349] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,45349,1689089620952-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:33:41,387 INFO [RS:3;jenkins-hbase9:45349] regionserver.Replication(203): jenkins-hbase9.apache.org,45349,1689089620952 started 2023-07-11 15:33:41,388 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,45349,1689089620952, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:45349, sessionid=0x10154f6e260000b 2023-07-11 15:33:41,388 DEBUG [RS:3;jenkins-hbase9:45349] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:33:41,388 DEBUG [RS:3;jenkins-hbase9:45349] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,388 DEBUG [RS:3;jenkins-hbase9:45349] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,45349,1689089620952' 2023-07-11 15:33:41,388 DEBUG [RS:3;jenkins-hbase9:45349] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:33:41,389 DEBUG [RS:3;jenkins-hbase9:45349] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:33:41,389 DEBUG [RS:3;jenkins-hbase9:45349] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:33:41,389 DEBUG [RS:3;jenkins-hbase9:45349] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:33:41,389 DEBUG [RS:3;jenkins-hbase9:45349] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:41,389 DEBUG [RS:3;jenkins-hbase9:45349] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,45349,1689089620952' 2023-07-11 15:33:41,390 DEBUG [RS:3;jenkins-hbase9:45349] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:33:41,390 DEBUG [RS:3;jenkins-hbase9:45349] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:33:41,391 DEBUG [RS:3;jenkins-hbase9:45349] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:33:41,391 INFO [RS:3;jenkins-hbase9:45349] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 15:33:41,391 INFO [RS:3;jenkins-hbase9:45349] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-11 15:33:41,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master
2023-07-11 15:33:41,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-11 15:33:41,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-11 15:33:41,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-11 15:33:41,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup
2023-07-11 15:33:41,412 DEBUG [hconnection-0x1705aebc-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-11 15:33:41,415 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47548, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-11 15:33:41,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-11 15:33:41,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-11 15:33:41,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master
2023-07-11 15:33:41,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-11 15:33:41,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:55202 deadline: 1689090821433, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
2023-07-11 15:33:41,436 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-11 15:33:41,438 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-11 15:33:41,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-11 15:33:41,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-11 15:33:41,440 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-11 15:33:41,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default
2023-07-11 15:33:41,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-11 15:33:41,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default
2023-07-11 15:33:41,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-11 15:33:41,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_testTableMoveTruncateAndDrop_2018559090
2023-07-11 15:33:41,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2018559090
2023-07-11 15:33:41,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-11 15:33:41,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-11 15:33:41,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-11 15:33:41,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup
2023-07-11 15:33:41,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-11 15:33:41,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-11 15:33:41,467 INFO
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495] to rsgroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:41,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:41,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:41,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:41,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:41,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(238): Moving server region 48c2bf7782ee61fbc67dfe7aa5f38abc, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:41,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:41,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:41,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:41,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:41,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:41,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=48c2bf7782ee61fbc67dfe7aa5f38abc, REOPEN/MOVE 2023-07-11 15:33:41,480 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=48c2bf7782ee61fbc67dfe7aa5f38abc, REOPEN/MOVE 2023-07-11 15:33:41,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(238): Moving server region db11ce5f2f749a24653755c2ee31ecfe, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:41,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:41,482 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=48c2bf7782ee61fbc67dfe7aa5f38abc, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:41,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:41,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): 
server 1 is on host 0 2023-07-11 15:33:41,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:41,482 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089621482"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089621482"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089621482"}]},"ts":"1689089621482"} 2023-07-11 15:33:41,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:41,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, REOPEN/MOVE 2023-07-11 15:33:41,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:41,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:41,485 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, REOPEN/MOVE 2023-07-11 15:33:41,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:41,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:41,486 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:41,487 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089621486"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089621486"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089621486"}]},"ts":"1689089621486"} 2023-07-11 15:33:41,486 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 48c2bf7782ee61fbc67dfe7aa5f38abc, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:41,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:41,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:41,491 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 
15:33:41,494 INFO [RS:3;jenkins-hbase9:45349] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C45349%2C1689089620952, suffix=, logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,45349,1689089620952, archiveDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs, maxLogs=32 2023-07-11 15:33:41,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-11 15:33:41,497 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-11 15:33:41,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-11 15:33:41,504 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,42495,1689089616669, state=CLOSING 2023-07-11 15:33:41,507 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 15:33:41,507 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:33:41,507 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:41,527 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK] 2023-07-11 15:33:41,530 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK] 2023-07-11 15:33:41,532 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK] 2023-07-11 15:33:41,545 INFO [RS:3;jenkins-hbase9:45349] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,45349,1689089620952/jenkins-hbase9.apache.org%2C45349%2C1689089620952.1689089621496 2023-07-11 15:33:41,545 DEBUG [RS:3;jenkins-hbase9:45349] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK], DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK], DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK]] 2023-07-11 15:33:41,661 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-11 15:33:41,661 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:41,662 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 15:33:41,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing db11ce5f2f749a24653755c2ee31ecfe, disabling compactions & flushes 2023-07-11 15:33:41,663 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 15:33:41,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:41,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 15:33:41,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:41,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 15:33:41,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. after waiting 0 ms 2023-07-11 15:33:41,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 15:33:41,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:41,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing db11ce5f2f749a24653755c2ee31ecfe 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-11 15:33:41,664 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.21 KB heapSize=6.16 KB 2023-07-11 15:33:41,782 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.03 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/info/45b9fb0fc7a9462bad6657f05ec114f2 2023-07-11 15:33:41,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/.tmp/info/8f2aba012e7641a099d18b10060f2fd8 2023-07-11 15:33:41,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/.tmp/info/8f2aba012e7641a099d18b10060f2fd8 as hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info/8f2aba012e7641a099d18b10060f2fd8 2023-07-11 15:33:41,879 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info/8f2aba012e7641a099d18b10060f2fd8, entries=2, sequenceid=6, filesize=4.8 K 2023-07-11 15:33:41,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for db11ce5f2f749a24653755c2ee31ecfe in 219ms, sequenceid=6, compaction requested=false 2023-07-11 15:33:41,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-11 15:33:41,967 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/table/4a82b29a4b324e7ea9979d1d60cac0aa 2023-07-11 15:33:41,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-11 15:33:41,983 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:41,983 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for db11ce5f2f749a24653755c2ee31ecfe: 2023-07-11 15:33:41,983 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding db11ce5f2f749a24653755c2ee31ecfe move to jenkins-hbase9.apache.org,43957,1689089616370 record at close sequenceid=6 2023-07-11 15:33:41,987 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:41,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:41,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:41,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 48c2bf7782ee61fbc67dfe7aa5f38abc, disabling compactions & flushes 2023-07-11 15:33:41,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:41,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:41,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. after waiting 0 ms 2023-07-11 15:33:41,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 
2023-07-11 15:33:41,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 48c2bf7782ee61fbc67dfe7aa5f38abc 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-11 15:33:41,992 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/info/45b9fb0fc7a9462bad6657f05ec114f2 as hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info/45b9fb0fc7a9462bad6657f05ec114f2 2023-07-11 15:33:42,006 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info/45b9fb0fc7a9462bad6657f05ec114f2, entries=22, sequenceid=16, filesize=7.3 K 2023-07-11 15:33:42,012 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/table/4a82b29a4b324e7ea9979d1d60cac0aa as hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table/4a82b29a4b324e7ea9979d1d60cac0aa 2023-07-11 15:33:42,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/.tmp/m/e907d3b1b91346dcbedfa74bd8de91ba 2023-07-11 15:33:42,043 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table/4a82b29a4b324e7ea9979d1d60cac0aa, entries=4, sequenceid=16, filesize=4.8 K 2023-07-11 15:33:42,046 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.21 KB/3290, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 382ms, sequenceid=16, compaction requested=false 2023-07-11 15:33:42,046 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-11 15:33:42,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/.tmp/m/e907d3b1b91346dcbedfa74bd8de91ba as hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m/e907d3b1b91346dcbedfa74bd8de91ba 2023-07-11 15:33:42,065 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-11 15:33:42,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:33:42,066 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 
2023-07-11 15:33:42,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 15:33:42,066 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase9.apache.org,45349,1689089620952 record at close sequenceid=16 2023-07-11 15:33:42,069 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-11 15:33:42,070 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-11 15:33:42,074 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m/e907d3b1b91346dcbedfa74bd8de91ba, entries=3, sequenceid=9, filesize=5.2 K 2023-07-11 15:33:42,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for 48c2bf7782ee61fbc67dfe7aa5f38abc in 91ms, sequenceid=9, compaction requested=false 2023-07-11 15:33:42,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-11 15:33:42,079 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-11 15:33:42,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,42495,1689089616669 in 563 msec 2023-07-11 15:33:42,081 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:42,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-11 15:33:42,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:33:42,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 
2023-07-11 15:33:42,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 48c2bf7782ee61fbc67dfe7aa5f38abc: 2023-07-11 15:33:42,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 48c2bf7782ee61fbc67dfe7aa5f38abc move to jenkins-hbase9.apache.org,45349,1689089620952 record at close sequenceid=9 2023-07-11 15:33:42,093 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 48c2bf7782ee61fbc67dfe7aa5f38abc, server=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:42,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:42,231 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-11 15:33:42,231 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,45349,1689089620952, state=OPENING 2023-07-11 15:33:42,234 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 15:33:42,234 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:42,234 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:33:42,389 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:42,389 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:33:42,392 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58744, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:33:42,400 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 15:33:42,400 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:33:42,403 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C45349%2C1689089620952.meta, suffix=.meta, logDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,45349,1689089620952, archiveDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs, maxLogs=32 2023-07-11 15:33:42,427 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK] 2023-07-11 15:33:42,428 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in 
unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK] 2023-07-11 15:33:42,427 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK] 2023-07-11 15:33:42,434 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,45349,1689089620952/jenkins-hbase9.apache.org%2C45349%2C1689089620952.meta.1689089622404.meta 2023-07-11 15:33:42,434 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39613,DS-8b2f8a51-5c15-4cca-a077-9b1a082be76a,DISK], DatanodeInfoWithStorage[127.0.0.1:34437,DS-e0c8503c-ff53-4f9c-864a-7bd24a64524b,DISK], DatanodeInfoWithStorage[127.0.0.1:45239,DS-4e5d4ef5-e767-4e4e-82b3-a2fb2c9686cc,DISK]] 2023-07-11 15:33:42,434 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:42,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 15:33:42,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 15:33:42,435 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-11 15:33:42,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 15:33:42,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:42,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 15:33:42,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 15:33:42,438 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 15:33:42,439 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info 2023-07-11 15:33:42,439 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info 2023-07-11 15:33:42,439 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 15:33:42,456 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info/45b9fb0fc7a9462bad6657f05ec114f2 2023-07-11 15:33:42,457 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:42,457 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 15:33:42,459 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:33:42,459 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/rep_barrier 2023-07-11 
15:33:42,460 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 15:33:42,460 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:42,461 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 15:33:42,462 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table 2023-07-11 15:33:42,462 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table 2023-07-11 15:33:42,463 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 15:33:42,482 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table/4a82b29a4b324e7ea9979d1d60cac0aa 2023-07-11 15:33:42,482 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:42,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740 2023-07-11 15:33:42,486 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740 2023-07-11 15:33:42,490 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-11 15:33:42,496 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 15:33:42,497 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10225599040, jitterRate=-0.04766687750816345}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 15:33:42,498 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 15:33:42,499 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689089622389 2023-07-11 15:33:42,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-11 15:33:42,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 15:33:42,504 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 15:33:42,505 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,45349,1689089620952, state=OPEN 2023-07-11 15:33:42,507 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 15:33:42,507 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:33:42,508 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=CLOSED 2023-07-11 15:33:42,508 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=48c2bf7782ee61fbc67dfe7aa5f38abc, regionState=CLOSED 2023-07-11 15:33:42,508 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089622508"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089622508"}]},"ts":"1689089622508"} 2023-07-11 15:33:42,509 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089622508"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089622508"}]},"ts":"1689089622508"} 2023-07-11 15:33:42,509 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42495] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Mutate size: 217 connection: 172.31.2.10:50064 deadline: 1689089682509, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: 
hostname=jenkins-hbase9.apache.org port=45349 startCode=1689089620952. As of locationSeqNum=16. 2023-07-11 15:33:42,510 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42495] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 213 connection: 172.31.2.10:50064 deadline: 1689089682509, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=45349 startCode=1689089620952. As of locationSeqNum=16. 2023-07-11 15:33:42,511 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-07-11 15:33:42,511 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,45349,1689089620952 in 273 msec 2023-07-11 15:33:42,515 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 1.0210 sec 2023-07-11 15:33:42,611 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:33:42,613 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58748, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:33:42,624 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-11 15:33:42,625 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; CloseRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,42495,1689089616669 in 1.1270 sec 2023-07-11 15:33:42,627 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:33:42,628 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-11 15:33:42,629 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 48c2bf7782ee61fbc67dfe7aa5f38abc, server=jenkins-hbase9.apache.org,42495,1689089616669 in 1.1370 sec 2023-07-11 15:33:42,630 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=48c2bf7782ee61fbc67dfe7aa5f38abc, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:42,631 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-11 15:33:42,631 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:42,632 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089622631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089622631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089622631"}]},"ts":"1689089622631"} 2023-07-11 15:33:42,633 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=48c2bf7782ee61fbc67dfe7aa5f38abc, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:42,633 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089622633"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089622633"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089622633"}]},"ts":"1689089622633"} 2023-07-11 15:33:42,635 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=13, state=RUNNABLE; OpenRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:42,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=12, state=RUNNABLE; OpenRegionProcedure 48c2bf7782ee61fbc67dfe7aa5f38abc, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:42,790 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:42,790 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:33:42,794 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:41610, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:33:42,799 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:42,799 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => db11ce5f2f749a24653755c2ee31ecfe, NAME => 'hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:42,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:42,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:42,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:42,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:42,813 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:42,813 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:42,814 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 48c2bf7782ee61fbc67dfe7aa5f38abc, NAME => 'hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:42,814 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 15:33:42,814 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. service=MultiRowMutationService 2023-07-11 15:33:42,814 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
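The "Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, ..." entries above report the per-column-family block-cache flags in effect when each store opens. A minimal sketch of how the same flags map onto a column family descriptor through the public HBase 2.x client API (method names assumed from ColumnFamilyDescriptorBuilder; the family name "info" mirrors the log and is illustrative only):

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CacheConfigSketch {
  public static void main(String[] args) {
    // Per-family caching knobs behind the "Created cacheConfig" store-open log line.
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))
        .setBlockCacheEnabled(true)      // cacheDataOnRead=true
        .setCacheDataOnWrite(false)      // cacheDataOnWrite=false
        .setCacheIndexesOnWrite(false)   // cacheIndexesOnWrite=false
        .setCacheBloomsOnWrite(false)    // cacheBloomsOnWrite=false
        .setEvictBlocksOnClose(false)    // cacheEvictOnClose=false
        .setPrefetchBlocksOnOpen(false)  // prefetchOnOpen=false
        .build();
    System.out.println(info);
  }
}

(cacheDataCompressed is a cluster-wide setting, hbase.block.data.cachecompressed, rather than a per-family descriptor flag.)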
2023-07-11 15:33:42,814 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:42,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:42,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:42,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:42,816 DEBUG [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info 2023-07-11 15:33:42,816 DEBUG [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info 2023-07-11 15:33:42,817 INFO [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:42,817 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region db11ce5f2f749a24653755c2ee31ecfe columnFamilyName info 2023-07-11 15:33:42,818 DEBUG [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m 2023-07-11 15:33:42,819 DEBUG [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m 2023-07-11 15:33:42,819 INFO [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 48c2bf7782ee61fbc67dfe7aa5f38abc columnFamilyName m 2023-07-11 15:33:42,831 DEBUG [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] regionserver.HStore(539): loaded hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info/8f2aba012e7641a099d18b10060f2fd8 2023-07-11 15:33:42,832 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] regionserver.HStore(310): Store=db11ce5f2f749a24653755c2ee31ecfe/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:42,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:42,834 DEBUG [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] regionserver.HStore(539): loaded hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m/e907d3b1b91346dcbedfa74bd8de91ba 2023-07-11 15:33:42,834 INFO [StoreOpener-48c2bf7782ee61fbc67dfe7aa5f38abc-1] regionserver.HStore(310): Store=48c2bf7782ee61fbc67dfe7aa5f38abc/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:42,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:42,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:42,839 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:42,843 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:42,844 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:33:42,845 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened db11ce5f2f749a24653755c2ee31ecfe; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11249497280, jitterRate=0.04769107699394226}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:42,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open 
journal for db11ce5f2f749a24653755c2ee31ecfe: 2023-07-11 15:33:42,845 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 48c2bf7782ee61fbc67dfe7aa5f38abc; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@736a6957, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:42,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 48c2bf7782ee61fbc67dfe7aa5f38abc: 2023-07-11 15:33:42,847 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc., pid=20, masterSystemTime=1689089622789 2023-07-11 15:33:42,847 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe., pid=19, masterSystemTime=1689089622789 2023-07-11 15:33:42,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:42,852 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:42,854 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:33:42,854 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 
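The repeated CompactionConfiguration(173) lines above are the store-open printout of the compaction defaults in force for this minicluster. A minimal sketch of the hbase-site keys those logged values correspond to (key names assumed from the stock HBase compaction configuration; nothing here is set by the test itself):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);        // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                               // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                              // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                        // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);                // off-peak ratio
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);  // throttle point
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);                   // major period (7 days)
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);                 // major jitter
    System.out.println(conf.get("hbase.hstore.compaction.ratio"));
  }
}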
2023-07-11 15:33:42,854 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:42,855 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089622854"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089622854"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089622854"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089622854"}]},"ts":"1689089622854"} 2023-07-11 15:33:42,855 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=48c2bf7782ee61fbc67dfe7aa5f38abc, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:42,856 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089622855"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089622855"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089622855"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089622855"}]},"ts":"1689089622855"} 2023-07-11 15:33:42,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=13 2023-07-11 15:33:42,864 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=13, state=SUCCESS; OpenRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,43957,1689089616370 in 222 msec 2023-07-11 15:33:42,865 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=12 2023-07-11 15:33:42,865 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=12, state=SUCCESS; OpenRegionProcedure 48c2bf7782ee61fbc67dfe7aa5f38abc, server=jenkins-hbase9.apache.org,45349,1689089620952 in 223 msec 2023-07-11 15:33:42,872 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, REOPEN/MOVE in 1.3790 sec 2023-07-11 15:33:42,874 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=48c2bf7782ee61fbc67dfe7aa5f38abc, REOPEN/MOVE in 1.3870 sec 2023-07-11 15:33:43,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669] are moved back to default 2023-07-11 15:33:43,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:43,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:43,505 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42495] ipc.CallRunner(144): callId: 3 service: ClientService methodName: 
Scan size: 136 connection: 172.31.2.10:47548 deadline: 1689089683504, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=45349 startCode=1689089620952. As of locationSeqNum=9. 2023-07-11 15:33:43,608 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42495] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:47548 deadline: 1689089683608, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=45349 startCode=1689089620952. As of locationSeqNum=16. 2023-07-11 15:33:43,710 DEBUG [hconnection-0x1705aebc-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:33:43,715 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58758, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:33:43,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:43,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:43,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:43,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:43,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:43,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:43,759 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:33:43,762 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42495] ipc.CallRunner(144): callId: 50 service: ClientService methodName: ExecService size: 622 connection: 172.31.2.10:50064 deadline: 1689089683761, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=45349 startCode=1689089620952. As of locationSeqNum=9. 
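The RSGroupAdminService.MoveServers / ListRSGroupInfos / GetRSGroupInfo requests logged above are driven by the rsgroup admin client. A minimal sketch of the same calls, assuming the branch-2.4 hbase-rsgroup API (RSGroupAdminClient and Address; the server address and group name below are copied from the log purely for illustration):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_2018559090";

      // MoveServers: a region server is addressed by host:port, without the start code.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 36133)), group);

      // GetRSGroupInfo / ListRSGroupInfos: the read-side calls seen in the log.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
      System.out.println(info.getServers());
      rsGroupAdmin.listRSGroups().forEach(g -> System.out.println(g.getName()));
    }
  }
}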
2023-07-11 15:33:43,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-11 15:33:43,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 15:33:43,869 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:43,869 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:43,870 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:43,870 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:43,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 15:33:43,880 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:33:43,887 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:43,888 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:43,888 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:43,888 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 empty. 2023-07-11 15:33:43,888 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:43,889 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 empty. 
2023-07-11 15:33:43,893 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:43,893 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:43,893 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 empty. 2023-07-11 15:33:43,893 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:43,894 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 empty. 2023-07-11 15:33:43,894 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:43,895 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:43,895 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 empty. 
2023-07-11 15:33:43,900 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:43,901 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-11 15:33:43,947 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:43,954 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 61d773bf64cca2abfaf2347b17a50a35, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:43,954 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b1236b49cb63808d6f804e5665941941, NAME => 'Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:43,954 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f6a5a51ec794ea32a8ed5c0d2b67a618, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:43,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:43,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 
61d773bf64cca2abfaf2347b17a50a35, disabling compactions & flushes 2023-07-11 15:33:43,999 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:43,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:43,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. after waiting 0 ms 2023-07-11 15:33:43,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:44,000 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:44,000 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 61d773bf64cca2abfaf2347b17a50a35: 2023-07-11 15:33:44,001 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 417827834f8a6ff7deacc3ace80a18b8, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:44,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing f6a5a51ec794ea32a8ed5c0d2b67a618, disabling compactions & flushes 2023-07-11 15:33:44,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:44,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:44,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 
after waiting 0 ms 2023-07-11 15:33:44,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:44,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:44,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b1236b49cb63808d6f804e5665941941, disabling compactions & flushes 2023-07-11 15:33:44,008 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:44,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:44,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. after waiting 0 ms 2023-07-11 15:33:44,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:44,008 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 
2023-07-11 15:33:44,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b1236b49cb63808d6f804e5665941941: 2023-07-11 15:33:44,008 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e9a0df4772b35ec85dde21e5af054331, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:44,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f6a5a51ec794ea32a8ed5c0d2b67a618: 2023-07-11 15:33:44,033 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 417827834f8a6ff7deacc3ace80a18b8, disabling compactions & flushes 2023-07-11 15:33:44,034 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:44,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:44,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. after waiting 0 ms 2023-07-11 15:33:44,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:44,034 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 
2023-07-11 15:33:44,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 417827834f8a6ff7deacc3ace80a18b8: 2023-07-11 15:33:44,037 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,037 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing e9a0df4772b35ec85dde21e5af054331, disabling compactions & flushes 2023-07-11 15:33:44,037 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:44,037 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:44,037 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. after waiting 0 ms 2023-07-11 15:33:44,037 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:44,037 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 
2023-07-11 15:33:44,037 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e9a0df4772b35ec85dde21e5af054331: 2023-07-11 15:33:44,041 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:33:44,043 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089624042"}]},"ts":"1689089624042"} 2023-07-11 15:33:44,043 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089624042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089624042"}]},"ts":"1689089624042"} 2023-07-11 15:33:44,043 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089624042"}]},"ts":"1689089624042"} 2023-07-11 15:33:44,043 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089624042"}]},"ts":"1689089624042"} 2023-07-11 15:33:44,044 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089624042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089624042"}]},"ts":"1689089624042"} 2023-07-11 15:33:44,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 15:33:44,096 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
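The CreateTableProcedure above lays out one family "f" and five regions bounded at aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz before adding them to hbase:meta. A minimal client-side sketch of an equivalent create using the standard Admin API (split keys copied from the region boundaries in the log; the connection setup is illustrative, not the test's own code):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTableSketch {
  public static void main(String[] args) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setRegionReplication(1)                 // REGION_REPLICATION => '1'
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)                   // VERSIONS => '1'
            .setBlocksize(65536)                 // BLOCKSIZE => '65536'
            .build())
        .build();

    // Four split keys give the five regions created in the log.
    byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },   // i\xBF\x14i\xBE
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },          // r\x1C\xC7r\x1B
        Bytes.toBytes("zzzzz")
    };

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(desc, splits);
    }
  }
}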
2023-07-11 15:33:44,098 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:33:44,099 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089624098"}]},"ts":"1689089624098"} 2023-07-11 15:33:44,101 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-11 15:33:44,105 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:44,105 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:44,106 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:44,106 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:44,106 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, ASSIGN}] 2023-07-11 15:33:44,110 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, ASSIGN 2023-07-11 15:33:44,110 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, ASSIGN 2023-07-11 15:33:44,112 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, ASSIGN 2023-07-11 15:33:44,112 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, ASSIGN 2023-07-11 15:33:44,113 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, ASSIGN 2023-07-11 15:33:44,113 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:44,114 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:33:44,114 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:44,114 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:33:44,115 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:44,264 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
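The master keeps answering "Checking to see if procedure is done pid=21" while the client blocks on the create; a non-blocking caller would poll instead. A minimal sketch of such a wait loop using Admin#isTableAvailable (an illustrative pattern only, not code from this test):

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class WaitForTableSketch {
  // Poll until all regions of the new table are assigned and open.
  static void waitUntilAvailable(Admin admin, TableName table, long timeoutMs) throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!admin.isTableAvailable(table)) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("Table " + table + " not available in time");
      }
      TimeUnit.MILLISECONDS.sleep(100); // roughly the cadence of the "is procedure done" checks above
    }
  }
}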
2023-07-11 15:33:44,268 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=f6a5a51ec794ea32a8ed5c0d2b67a618, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:44,268 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=61d773bf64cca2abfaf2347b17a50a35, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:44,268 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=e9a0df4772b35ec85dde21e5af054331, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:44,269 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624268"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089624268"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089624268"}]},"ts":"1689089624268"} 2023-07-11 15:33:44,269 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=417827834f8a6ff7deacc3ace80a18b8, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:44,269 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624269"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089624269"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089624269"}]},"ts":"1689089624269"} 2023-07-11 15:33:44,269 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624268"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089624268"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089624268"}]},"ts":"1689089624268"} 2023-07-11 15:33:44,269 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b1236b49cb63808d6f804e5665941941, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:44,270 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089624269"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089624269"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089624269"}]},"ts":"1689089624269"} 2023-07-11 15:33:44,269 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089624268"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089624268"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089624268"}]},"ts":"1689089624268"} 2023-07-11 15:33:44,272 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=24, state=RUNNABLE; OpenRegionProcedure 
f6a5a51ec794ea32a8ed5c0d2b67a618, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:44,276 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=25, state=RUNNABLE; OpenRegionProcedure 417827834f8a6ff7deacc3ace80a18b8, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:44,276 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=23, state=RUNNABLE; OpenRegionProcedure 61d773bf64cca2abfaf2347b17a50a35, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:44,278 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=22, state=RUNNABLE; OpenRegionProcedure b1236b49cb63808d6f804e5665941941, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:44,280 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=26, state=RUNNABLE; OpenRegionProcedure e9a0df4772b35ec85dde21e5af054331, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:44,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 15:33:44,446 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:44,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e9a0df4772b35ec85dde21e5af054331, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-11 15:33:44,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:44,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:44,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:44,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 
2023-07-11 15:33:44,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 417827834f8a6ff7deacc3ace80a18b8, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-11 15:33:44,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:44,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:44,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:44,464 INFO [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:44,487 INFO [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:44,490 DEBUG [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/f 2023-07-11 15:33:44,491 DEBUG [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/f 2023-07-11 15:33:44,491 INFO [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e9a0df4772b35ec85dde21e5af054331 columnFamilyName f 2023-07-11 15:33:44,492 INFO [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] regionserver.HStore(310): Store=e9a0df4772b35ec85dde21e5af054331/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:44,493 DEBUG [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/f 2023-07-11 15:33:44,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:44,495 DEBUG [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/f 2023-07-11 15:33:44,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:44,502 INFO [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 417827834f8a6ff7deacc3ace80a18b8 columnFamilyName f 2023-07-11 15:33:44,507 INFO [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] regionserver.HStore(310): Store=417827834f8a6ff7deacc3ace80a18b8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:44,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:44,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:44,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:44,521 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:44,522 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:44,522 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e9a0df4772b35ec85dde21e5af054331; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11157138880, jitterRate=0.03908953070640564}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:44,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e9a0df4772b35ec85dde21e5af054331: 2023-07-11 15:33:44,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331., pid=31, masterSystemTime=1689089624430 2023-07-11 15:33:44,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:44,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:44,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:44,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 
2023-07-11 15:33:44,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 417827834f8a6ff7deacc3ace80a18b8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10968781600, jitterRate=0.021547392010688782}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:44,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 417827834f8a6ff7deacc3ace80a18b8: 2023-07-11 15:33:44,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61d773bf64cca2abfaf2347b17a50a35, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-11 15:33:44,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:44,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,528 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=e9a0df4772b35ec85dde21e5af054331, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:44,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:44,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:44,528 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089624528"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089624528"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089624528"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089624528"}]},"ts":"1689089624528"} 2023-07-11 15:33:44,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8., pid=28, masterSystemTime=1689089624431 2023-07-11 15:33:44,530 INFO [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:44,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 
2023-07-11 15:33:44,535 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:44,535 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:44,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b1236b49cb63808d6f804e5665941941, NAME => 'Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-11 15:33:44,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:44,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:44,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:44,536 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=417827834f8a6ff7deacc3ace80a18b8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:44,536 DEBUG [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/f 2023-07-11 15:33:44,536 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624536"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089624536"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089624536"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089624536"}]},"ts":"1689089624536"} 2023-07-11 15:33:44,536 DEBUG [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/f 2023-07-11 15:33:44,536 INFO [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61d773bf64cca2abfaf2347b17a50a35 columnFamilyName f 2023-07-11 15:33:44,537 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=26 2023-07-11 15:33:44,538 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=26, state=SUCCESS; OpenRegionProcedure e9a0df4772b35ec85dde21e5af054331, server=jenkins-hbase9.apache.org,45349,1689089620952 in 254 msec 2023-07-11 15:33:44,539 INFO [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] regionserver.HStore(310): Store=61d773bf64cca2abfaf2347b17a50a35/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:44,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:44,541 INFO [StoreOpener-b1236b49cb63808d6f804e5665941941-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:44,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:44,541 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, ASSIGN in 432 msec 2023-07-11 15:33:44,543 DEBUG [StoreOpener-b1236b49cb63808d6f804e5665941941-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/f 2023-07-11 15:33:44,543 DEBUG [StoreOpener-b1236b49cb63808d6f804e5665941941-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/f 2023-07-11 15:33:44,544 INFO [StoreOpener-b1236b49cb63808d6f804e5665941941-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b1236b49cb63808d6f804e5665941941 columnFamilyName f 
2023-07-11 15:33:44,545 INFO [StoreOpener-b1236b49cb63808d6f804e5665941941-1] regionserver.HStore(310): Store=b1236b49cb63808d6f804e5665941941/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:44,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:44,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=25 2023-07-11 15:33:44,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=25, state=SUCCESS; OpenRegionProcedure 417827834f8a6ff7deacc3ace80a18b8, server=jenkins-hbase9.apache.org,43957,1689089616370 in 264 msec 2023-07-11 15:33:44,550 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:44,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:44,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:44,553 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, ASSIGN in 444 msec 2023-07-11 15:33:44,553 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 61d773bf64cca2abfaf2347b17a50a35; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10061374880, jitterRate=-0.06296144425868988}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:44,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 61d773bf64cca2abfaf2347b17a50a35: 2023-07-11 15:33:44,554 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35., pid=29, masterSystemTime=1689089624430 2023-07-11 15:33:44,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:44,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:44,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 
2023-07-11 15:33:44,558 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:44,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6a5a51ec794ea32a8ed5c0d2b67a618, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-11 15:33:44,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:44,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:44,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:44,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:44,559 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=61d773bf64cca2abfaf2347b17a50a35, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:44,560 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624559"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089624559"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089624559"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089624559"}]},"ts":"1689089624559"} 2023-07-11 15:33:44,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:44,563 INFO [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:44,567 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened b1236b49cb63808d6f804e5665941941; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11137309280, jitterRate=0.03724275529384613}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:44,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for b1236b49cb63808d6f804e5665941941: 2023-07-11 15:33:44,568 DEBUG 
[StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/f 2023-07-11 15:33:44,568 DEBUG [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/f 2023-07-11 15:33:44,569 INFO [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6a5a51ec794ea32a8ed5c0d2b67a618 columnFamilyName f 2023-07-11 15:33:44,569 INFO [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] regionserver.HStore(310): Store=f6a5a51ec794ea32a8ed5c0d2b67a618/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:44,570 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:44,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:44,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941., pid=30, masterSystemTime=1689089624431 2023-07-11 15:33:44,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:44,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:44,577 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 
2023-07-11 15:33:44,577 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=23 2023-07-11 15:33:44,577 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=23, state=SUCCESS; OpenRegionProcedure 61d773bf64cca2abfaf2347b17a50a35, server=jenkins-hbase9.apache.org,45349,1689089620952 in 291 msec 2023-07-11 15:33:44,578 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b1236b49cb63808d6f804e5665941941, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:44,579 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089624578"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089624578"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089624578"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089624578"}]},"ts":"1689089624578"} 2023-07-11 15:33:44,581 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, ASSIGN in 472 msec 2023-07-11 15:33:44,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:44,585 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened f6a5a51ec794ea32a8ed5c0d2b67a618; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11073941440, jitterRate=0.03134116530418396}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:44,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for f6a5a51ec794ea32a8ed5c0d2b67a618: 2023-07-11 15:33:44,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618., pid=27, masterSystemTime=1689089624430 2023-07-11 15:33:44,600 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=22 2023-07-11 15:33:44,600 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=22, state=SUCCESS; OpenRegionProcedure b1236b49cb63808d6f804e5665941941, server=jenkins-hbase9.apache.org,43957,1689089616370 in 306 msec 2023-07-11 15:33:44,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:44,601 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 
2023-07-11 15:33:44,603 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=f6a5a51ec794ea32a8ed5c0d2b67a618, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:44,603 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089624603"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089624603"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089624603"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089624603"}]},"ts":"1689089624603"} 2023-07-11 15:33:44,605 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, ASSIGN in 494 msec 2023-07-11 15:33:44,609 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=24 2023-07-11 15:33:44,609 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; OpenRegionProcedure f6a5a51ec794ea32a8ed5c0d2b67a618, server=jenkins-hbase9.apache.org,45349,1689089620952 in 334 msec 2023-07-11 15:33:44,613 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=21 2023-07-11 15:33:44,613 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:33:44,613 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089624613"}]},"ts":"1689089624613"} 2023-07-11 15:33:44,614 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, ASSIGN in 503 msec 2023-07-11 15:33:44,615 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-11 15:33:44,618 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:33:44,621 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 863 msec 2023-07-11 15:33:44,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 15:33:44,886 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-11 15:33:44,886 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-11 15:33:44,888 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:44,889 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42495] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.2.10:50078 deadline: 1689089684889, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=45349 startCode=1689089620952. As of locationSeqNum=16. 2023-07-11 15:33:44,992 DEBUG [hconnection-0x2ad2b5e2-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:33:44,996 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58770, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:33:45,008 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-11 15:33:45,009 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:45,009 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-11 15:33:45,010 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:45,016 DEBUG [Listener at localhost/45661] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:33:45,017 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 15:33:45,018 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:39130, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:33:45,021 DEBUG [Listener at localhost/45661] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:33:45,025 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47564, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:33:45,026 DEBUG [Listener at localhost/45661] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:33:45,030 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:41626, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:33:45,032 DEBUG [Listener at localhost/45661] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:33:45,033 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:58776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:33:45,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:45,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service 
request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:33:45,046 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:45,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:45,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:45,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region b1236b49cb63808d6f804e5665941941 to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:45,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:45,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:45,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:45,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:45,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, REOPEN/MOVE 2023-07-11 15:33:45,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region 61d773bf64cca2abfaf2347b17a50a35 to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,067 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, REOPEN/MOVE 2023-07-11 15:33:45,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:45,068 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:45,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:45,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:45,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:45,069 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b1236b49cb63808d6f804e5665941941, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:45,069 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089625069"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625069"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625069"}]},"ts":"1689089625069"} 2023-07-11 15:33:45,072 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=32, state=RUNNABLE; CloseRegionProcedure b1236b49cb63808d6f804e5665941941, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:45,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, REOPEN/MOVE 2023-07-11 15:33:45,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region f6a5a51ec794ea32a8ed5c0d2b67a618 to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:45,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:45,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:45,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:45,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:45,089 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, REOPEN/MOVE 2023-07-11 15:33:45,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, REOPEN/MOVE 2023-07-11 15:33:45,091 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region 417827834f8a6ff7deacc3ace80a18b8 to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,091 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=61d773bf64cca2abfaf2347b17a50a35, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:45,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:45,093 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625091"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625091"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625091"}]},"ts":"1689089625091"} 2023-07-11 15:33:45,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:45,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:45,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:45,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:45,096 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure 61d773bf64cca2abfaf2347b17a50a35, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:45,092 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, REOPEN/MOVE 2023-07-11 15:33:45,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, REOPEN/MOVE 2023-07-11 15:33:45,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region e9a0df4772b35ec85dde21e5af054331 to RSGroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:45,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:45,107 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, REOPEN/MOVE 2023-07-11 15:33:45,107 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=f6a5a51ec794ea32a8ed5c0d2b67a618, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:45,107 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625107"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625107"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625107"}]},"ts":"1689089625107"} 2023-07-11 15:33:45,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:45,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:45,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:45,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:45,110 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=35, state=RUNNABLE; CloseRegionProcedure f6a5a51ec794ea32a8ed5c0d2b67a618, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:45,112 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=417827834f8a6ff7deacc3ace80a18b8, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:45,112 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625112"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625112"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625112"}]},"ts":"1689089625112"} 2023-07-11 15:33:45,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, REOPEN/MOVE 2023-07-11 15:33:45,116 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=36, state=RUNNABLE; CloseRegionProcedure 417827834f8a6ff7deacc3ace80a18b8, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:45,130 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, REOPEN/MOVE 2023-07-11 15:33:45,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_2018559090, current retry=0 2023-07-11 15:33:45,133 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=e9a0df4772b35ec85dde21e5af054331, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:45,133 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089625133"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625133"}]},"ts":"1689089625133"} 2023-07-11 15:33:45,137 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=38, state=RUNNABLE; CloseRegionProcedure e9a0df4772b35ec85dde21e5af054331, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:45,138 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 15:33:45,139 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-11 15:33:45,139 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:33:45,139 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-11 15:33:45,139 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 15:33:45,139 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-11 15:33:45,140 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-11 15:33:45,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing b1236b49cb63808d6f804e5665941941, disabling compactions & flushes 2023-07-11 15:33:45,243 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:45,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:45,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. after waiting 0 ms 2023-07-11 15:33:45,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 
2023-07-11 15:33:45,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:45,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:45,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for b1236b49cb63808d6f804e5665941941: 2023-07-11 15:33:45,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding b1236b49cb63808d6f804e5665941941 move to jenkins-hbase9.apache.org,36133,1689089616857 record at close sequenceid=2 2023-07-11 15:33:45,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 61d773bf64cca2abfaf2347b17a50a35, disabling compactions & flushes 2023-07-11 15:33:45,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. after waiting 0 ms 2023-07-11 15:33:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:45,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,266 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b1236b49cb63808d6f804e5665941941, regionState=CLOSED 2023-07-11 15:33:45,266 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089625265"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089625265"}]},"ts":"1689089625265"} 2023-07-11 15:33:45,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 417827834f8a6ff7deacc3ace80a18b8, disabling compactions & flushes 2023-07-11 15:33:45,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 
2023-07-11 15:33:45,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:45,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. after waiting 0 ms 2023-07-11 15:33:45,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:45,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=32 2023-07-11 15:33:45,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=32, state=SUCCESS; CloseRegionProcedure b1236b49cb63808d6f804e5665941941, server=jenkins-hbase9.apache.org,43957,1689089616370 in 196 msec 2023-07-11 15:33:45,272 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,36133,1689089616857; forceNewPlan=false, retain=false 2023-07-11 15:33:45,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:45,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:45,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:45,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 417827834f8a6ff7deacc3ace80a18b8: 2023-07-11 15:33:45,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 417827834f8a6ff7deacc3ace80a18b8 move to jenkins-hbase9.apache.org,36133,1689089616857 record at close sequenceid=2 2023-07-11 15:33:45,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 
2023-07-11 15:33:45,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 61d773bf64cca2abfaf2347b17a50a35: 2023-07-11 15:33:45,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 61d773bf64cca2abfaf2347b17a50a35 move to jenkins-hbase9.apache.org,42495,1689089616669 record at close sequenceid=2 2023-07-11 15:33:45,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,288 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=417827834f8a6ff7deacc3ace80a18b8, regionState=CLOSED 2023-07-11 15:33:45,288 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625288"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089625288"}]},"ts":"1689089625288"} 2023-07-11 15:33:45,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing f6a5a51ec794ea32a8ed5c0d2b67a618, disabling compactions & flushes 2023-07-11 15:33:45,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:45,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:45,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. after waiting 0 ms 2023-07-11 15:33:45,290 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=61d773bf64cca2abfaf2347b17a50a35, regionState=CLOSED 2023-07-11 15:33:45,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 
2023-07-11 15:33:45,290 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625290"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089625290"}]},"ts":"1689089625290"} 2023-07-11 15:33:45,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=36 2023-07-11 15:33:45,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=36, state=SUCCESS; CloseRegionProcedure 417827834f8a6ff7deacc3ace80a18b8, server=jenkins-hbase9.apache.org,43957,1689089616370 in 176 msec 2023-07-11 15:33:45,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:45,300 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:45,300 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for f6a5a51ec794ea32a8ed5c0d2b67a618: 2023-07-11 15:33:45,300 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,36133,1689089616857; forceNewPlan=false, retain=false 2023-07-11 15:33:45,300 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding f6a5a51ec794ea32a8ed5c0d2b67a618 move to jenkins-hbase9.apache.org,42495,1689089616669 record at close sequenceid=2 2023-07-11 15:33:45,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-11 15:33:45,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure 61d773bf64cca2abfaf2347b17a50a35, server=jenkins-hbase9.apache.org,45349,1689089620952 in 197 msec 2023-07-11 15:33:45,301 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,42495,1689089616669; forceNewPlan=false, retain=false 2023-07-11 15:33:45,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e9a0df4772b35ec85dde21e5af054331, disabling compactions & flushes 2023-07-11 15:33:45,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:45,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:45,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. after waiting 0 ms 2023-07-11 15:33:45,303 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=f6a5a51ec794ea32a8ed5c0d2b67a618, regionState=CLOSED 2023-07-11 15:33:45,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:45,304 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625303"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089625303"}]},"ts":"1689089625303"} 2023-07-11 15:33:45,309 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:45,310 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=35 2023-07-11 15:33:45,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 
2023-07-11 15:33:45,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=35, state=SUCCESS; CloseRegionProcedure f6a5a51ec794ea32a8ed5c0d2b67a618, server=jenkins-hbase9.apache.org,45349,1689089620952 in 196 msec 2023-07-11 15:33:45,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e9a0df4772b35ec85dde21e5af054331: 2023-07-11 15:33:45,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding e9a0df4772b35ec85dde21e5af054331 move to jenkins-hbase9.apache.org,42495,1689089616669 record at close sequenceid=2 2023-07-11 15:33:45,312 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,42495,1689089616669; forceNewPlan=false, retain=false 2023-07-11 15:33:45,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,315 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=e9a0df4772b35ec85dde21e5af054331, regionState=CLOSED 2023-07-11 15:33:45,315 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089625315"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089625315"}]},"ts":"1689089625315"} 2023-07-11 15:33:45,320 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=38 2023-07-11 15:33:45,321 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; CloseRegionProcedure e9a0df4772b35ec85dde21e5af054331, server=jenkins-hbase9.apache.org,45349,1689089620952 in 180 msec 2023-07-11 15:33:45,322 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,42495,1689089616669; forceNewPlan=false, retain=false 2023-07-11 15:33:45,422 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
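
The entries above complete the close half of the REOPEN/MOVE cycle; the entries that follow record the reassignment and re-open on the target group's servers. As an illustrative aside (not part of the log), a test driving such a move would typically block until every region of the table is back online. A minimal sketch, assuming the shared HBaseTestingUtility instance that started this minicluster and the table name shown in the log:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForMoveSketch {
  // Sketch only: block until hbase:meta reports an OPEN location for every
  // region of the moved table. The utility instance is assumed to be the one
  // that started this minicluster.
  static void waitForTableOnline(HBaseTestingUtility util) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    util.waitUntilAllRegionsAssigned(table);
  }
}
```
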
2023-07-11 15:33:45,423 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=f6a5a51ec794ea32a8ed5c0d2b67a618, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:45,423 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=e9a0df4772b35ec85dde21e5af054331, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:45,423 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625423"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625423"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625423"}]},"ts":"1689089625423"} 2023-07-11 15:33:45,423 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089625423"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625423"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625423"}]},"ts":"1689089625423"} 2023-07-11 15:33:45,423 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=61d773bf64cca2abfaf2347b17a50a35, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:45,424 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625423"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625423"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625423"}]},"ts":"1689089625423"} 2023-07-11 15:33:45,424 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=417827834f8a6ff7deacc3ace80a18b8, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:45,424 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b1236b49cb63808d6f804e5665941941, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:45,424 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625424"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625424"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625424"}]},"ts":"1689089625424"} 2023-07-11 15:33:45,424 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089625424"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089625424"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089625424"}]},"ts":"1689089625424"} 2023-07-11 15:33:45,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=35, state=RUNNABLE; OpenRegionProcedure 
f6a5a51ec794ea32a8ed5c0d2b67a618, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:45,432 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=38, state=RUNNABLE; OpenRegionProcedure e9a0df4772b35ec85dde21e5af054331, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:45,433 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=33, state=RUNNABLE; OpenRegionProcedure 61d773bf64cca2abfaf2347b17a50a35, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:45,435 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=36, state=RUNNABLE; OpenRegionProcedure 417827834f8a6ff7deacc3ace80a18b8, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:45,438 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=32, state=RUNNABLE; OpenRegionProcedure b1236b49cb63808d6f804e5665941941, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:45,584 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:45,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e9a0df4772b35ec85dde21e5af054331, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-11 15:33:45,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:45,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,590 INFO [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,591 DEBUG [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/f 2023-07-11 15:33:45,591 DEBUG [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/f 2023-07-11 15:33:45,592 INFO [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e9a0df4772b35ec85dde21e5af054331 columnFamilyName f 2023-07-11 15:33:45,592 INFO [StoreOpener-e9a0df4772b35ec85dde21e5af054331-1] regionserver.HStore(310): Store=e9a0df4772b35ec85dde21e5af054331/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:45,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,595 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:45,595 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:33:45,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,603 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:39136, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:33:45,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:45,607 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 
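
The CompactionConfiguration line above reports the store's compaction tuning at its defaults. As an illustrative aside (not from the log), those values correspond to the standard HBase configuration keys sketched below; the key names are the usual ones and the values mirror the defaults the log reports.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuningSketch {
  // Sketch: the standard keys behind the CompactionConfiguration values logged
  // above, set here to the same (default) values purely for illustration.
  static Configuration compactionDefaults() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);      // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                             // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                            // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                      // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);              // off-peak ratio
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);                 // major period: 7 days
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);               // major jitter
    return conf;
  }
}
```
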
2023-07-11 15:33:45,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b1236b49cb63808d6f804e5665941941, NAME => 'Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-11 15:33:45,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:45,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,608 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e9a0df4772b35ec85dde21e5af054331; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9854128800, jitterRate=-0.08226273953914642}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:45,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e9a0df4772b35ec85dde21e5af054331: 2023-07-11 15:33:45,610 INFO [StoreOpener-b1236b49cb63808d6f804e5665941941-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331., pid=43, masterSystemTime=1689089625579 2023-07-11 15:33:45,612 DEBUG [StoreOpener-b1236b49cb63808d6f804e5665941941-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/f 2023-07-11 15:33:45,612 DEBUG [StoreOpener-b1236b49cb63808d6f804e5665941941-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/f 2023-07-11 15:33:45,612 INFO [StoreOpener-b1236b49cb63808d6f804e5665941941-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b1236b49cb63808d6f804e5665941941 columnFamilyName f 2023-07-11 15:33:45,613 INFO [StoreOpener-b1236b49cb63808d6f804e5665941941-1] regionserver.HStore(310): Store=b1236b49cb63808d6f804e5665941941/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:45,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:45,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:45,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:45,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61d773bf64cca2abfaf2347b17a50a35, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-11 15:33:45,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,616 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=e9a0df4772b35ec85dde21e5af054331, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:45,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,616 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089625615"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089625615"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089625615"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089625615"}]},"ts":"1689089625615"} 2023-07-11 15:33:45,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:45,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,618 INFO 
[StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:45,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=38 2023-07-11 15:33:45,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; OpenRegionProcedure e9a0df4772b35ec85dde21e5af054331, server=jenkins-hbase9.apache.org,42495,1689089616669 in 192 msec 2023-07-11 15:33:45,629 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened b1236b49cb63808d6f804e5665941941; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9931150400, jitterRate=-0.07508954405784607}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:45,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for b1236b49cb63808d6f804e5665941941: 2023-07-11 15:33:45,630 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941., pid=46, masterSystemTime=1689089625595 2023-07-11 15:33:45,637 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, REOPEN/MOVE in 520 msec 2023-07-11 15:33:45,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:45,637 DEBUG [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/f 2023-07-11 15:33:45,638 DEBUG [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/f 2023-07-11 15:33:45,638 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:45,638 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 
2023-07-11 15:33:45,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 417827834f8a6ff7deacc3ace80a18b8, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-11 15:33:45,638 INFO [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61d773bf64cca2abfaf2347b17a50a35 columnFamilyName f 2023-07-11 15:33:45,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:45,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,639 INFO [StoreOpener-61d773bf64cca2abfaf2347b17a50a35-1] regionserver.HStore(310): Store=61d773bf64cca2abfaf2347b17a50a35/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:45,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,643 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b1236b49cb63808d6f804e5665941941, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:45,644 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089625643"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089625643"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089625643"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089625643"}]},"ts":"1689089625643"} 2023-07-11 15:33:45,645 INFO [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,647 DEBUG [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/f 2023-07-11 15:33:45,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:45,647 DEBUG [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/f 2023-07-11 15:33:45,648 INFO [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 417827834f8a6ff7deacc3ace80a18b8 columnFamilyName f 2023-07-11 15:33:45,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 61d773bf64cca2abfaf2347b17a50a35; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11789683200, jitterRate=0.09799981117248535}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:45,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 61d773bf64cca2abfaf2347b17a50a35: 2023-07-11 15:33:45,650 INFO [StoreOpener-417827834f8a6ff7deacc3ace80a18b8-1] regionserver.HStore(310): Store=417827834f8a6ff7deacc3ace80a18b8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:45,651 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=32 2023-07-11 15:33:45,651 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35., pid=44, masterSystemTime=1689089625579 2023-07-11 15:33:45,651 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=32, state=SUCCESS; OpenRegionProcedure b1236b49cb63808d6f804e5665941941, server=jenkins-hbase9.apache.org,36133,1689089616857 in 208 msec 2023-07-11 15:33:45,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,654 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, REOPEN/MOVE in 586 msec 2023-07-11 15:33:45,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:45,654 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:45,654 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 
2023-07-11 15:33:45,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6a5a51ec794ea32a8ed5c0d2b67a618, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-11 15:33:45,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:45,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,656 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=61d773bf64cca2abfaf2347b17a50a35, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:45,656 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625656"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089625656"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089625656"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089625656"}]},"ts":"1689089625656"} 2023-07-11 15:33:45,658 INFO [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:45,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 417827834f8a6ff7deacc3ace80a18b8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9950150720, jitterRate=-0.07332000136375427}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:45,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 417827834f8a6ff7deacc3ace80a18b8: 2023-07-11 15:33:45,661 DEBUG [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/f 2023-07-11 15:33:45,661 DEBUG [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/f 2023-07-11 15:33:45,661 INFO [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6a5a51ec794ea32a8ed5c0d2b67a618 columnFamilyName f 2023-07-11 15:33:45,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8., pid=45, masterSystemTime=1689089625595 2023-07-11 15:33:45,662 INFO [StoreOpener-f6a5a51ec794ea32a8ed5c0d2b67a618-1] regionserver.HStore(310): Store=f6a5a51ec794ea32a8ed5c0d2b67a618/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:45,663 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=33 2023-07-11 15:33:45,663 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=33, state=SUCCESS; OpenRegionProcedure 61d773bf64cca2abfaf2347b17a50a35, server=jenkins-hbase9.apache.org,42495,1689089616669 in 225 msec 2023-07-11 15:33:45,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:45,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 
2023-07-11 15:33:45,666 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=417827834f8a6ff7deacc3ace80a18b8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:45,666 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625666"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089625666"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089625666"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089625666"}]},"ts":"1689089625666"} 2023-07-11 15:33:45,668 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, REOPEN/MOVE in 595 msec 2023-07-11 15:33:45,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,673 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=36 2023-07-11 15:33:45,673 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=36, state=SUCCESS; OpenRegionProcedure 417827834f8a6ff7deacc3ace80a18b8, server=jenkins-hbase9.apache.org,36133,1689089616857 in 235 msec 2023-07-11 15:33:45,676 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, REOPEN/MOVE in 580 msec 2023-07-11 15:33:45,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:45,683 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened f6a5a51ec794ea32a8ed5c0d2b67a618; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9616027520, jitterRate=-0.10443764925003052}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:45,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for f6a5a51ec794ea32a8ed5c0d2b67a618: 2023-07-11 15:33:45,686 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618., pid=42, masterSystemTime=1689089625579 2023-07-11 15:33:45,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 
2023-07-11 15:33:45,688 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:45,689 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=f6a5a51ec794ea32a8ed5c0d2b67a618, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:45,689 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089625689"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089625689"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089625689"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089625689"}]},"ts":"1689089625689"} 2023-07-11 15:33:45,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=35 2023-07-11 15:33:45,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=35, state=SUCCESS; OpenRegionProcedure f6a5a51ec794ea32a8ed5c0d2b67a618, server=jenkins-hbase9.apache.org,42495,1689089616669 in 265 msec 2023-07-11 15:33:45,697 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, REOPEN/MOVE in 608 msec 2023-07-11 15:33:46,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-11 15:33:46,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_2018559090. 
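
Here the MoveTables request that drove the whole close/reassign/open sequence completes. As an illustrative aside (not part of the log), the client-side call pattern that produces this sequence with the branch-2.4 hbase-rsgroup module looks roughly like the sketch below; the table and group names are taken from the log, RSGroupAdminClient is assumed to be the client these tests use, and the target group is assumed to have been created earlier in the test.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    String group = "Group_testTableMoveTruncateAndDrop_2018559090"; // group name from the log; assumed to already exist

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Issues RSGroupAdminService.MoveTables; the master then runs a REOPEN/MOVE
      // TransitRegionStateProcedure per region, i.e. the close/assign/open entries above.
      rsGroupAdmin.moveTables(Collections.singleton(table), group);
      // Issues RSGroupAdminService.GetRSGroupInfoOfTable, as the client does right after the move.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table now in group: " + info.getName());
    }
  }
}
```
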
2023-07-11 15:33:46,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:46,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:46,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:46,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:46,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:33:46,142 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:46,149 INFO [Listener at localhost/45661] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:46,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:46,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=47, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:46,173 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089626173"}]},"ts":"1689089626173"} 2023-07-11 15:33:46,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-11 15:33:46,175 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-11 15:33:46,177 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-11 15:33:46,183 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=e9a0df4772b35ec85dde21e5af054331, UNASSIGN}] 2023-07-11 15:33:46,187 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, UNASSIGN 2023-07-11 15:33:46,188 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, UNASSIGN 2023-07-11 15:33:46,188 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, UNASSIGN 2023-07-11 15:33:46,189 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, UNASSIGN 2023-07-11 15:33:46,190 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, UNASSIGN 2023-07-11 15:33:46,191 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b1236b49cb63808d6f804e5665941941, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:46,191 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=61d773bf64cca2abfaf2347b17a50a35, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:46,191 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089626191"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089626191"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089626191"}]},"ts":"1689089626191"} 2023-07-11 15:33:46,191 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626191"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089626191"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089626191"}]},"ts":"1689089626191"} 2023-07-11 15:33:46,192 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=417827834f8a6ff7deacc3ace80a18b8, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:46,192 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626192"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089626192"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089626192"}]},"ts":"1689089626192"} 2023-07-11 15:33:46,192 INFO [PEWorker-3] assignment.RegionStateStore(219): 
pid=50 updating hbase:meta row=f6a5a51ec794ea32a8ed5c0d2b67a618, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:46,193 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626192"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089626192"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089626192"}]},"ts":"1689089626192"} 2023-07-11 15:33:46,194 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=48, state=RUNNABLE; CloseRegionProcedure b1236b49cb63808d6f804e5665941941, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:46,194 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=e9a0df4772b35ec85dde21e5af054331, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:46,194 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089626194"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089626194"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089626194"}]},"ts":"1689089626194"} 2023-07-11 15:33:46,199 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=49, state=RUNNABLE; CloseRegionProcedure 61d773bf64cca2abfaf2347b17a50a35, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:46,201 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=51, state=RUNNABLE; CloseRegionProcedure 417827834f8a6ff7deacc3ace80a18b8, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:46,202 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=50, state=RUNNABLE; CloseRegionProcedure f6a5a51ec794ea32a8ed5c0d2b67a618, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:46,214 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=52, state=RUNNABLE; CloseRegionProcedure e9a0df4772b35ec85dde21e5af054331, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:46,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-11 15:33:46,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:46,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing b1236b49cb63808d6f804e5665941941, disabling compactions & flushes 2023-07-11 15:33:46,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:46,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 
2023-07-11 15:33:46,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. after waiting 0 ms 2023-07-11 15:33:46,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 2023-07-11 15:33:46,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:46,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing f6a5a51ec794ea32a8ed5c0d2b67a618, disabling compactions & flushes 2023-07-11 15:33:46,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:46,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:46,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. after waiting 0 ms 2023-07-11 15:33:46,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:46,370 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:33:46,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941. 
2023-07-11 15:33:46,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for b1236b49cb63808d6f804e5665941941: 2023-07-11 15:33:46,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:46,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:46,378 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b1236b49cb63808d6f804e5665941941, regionState=CLOSED 2023-07-11 15:33:46,378 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089626378"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626378"}]},"ts":"1689089626378"} 2023-07-11 15:33:46,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 417827834f8a6ff7deacc3ace80a18b8, disabling compactions & flushes 2023-07-11 15:33:46,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:46,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:46,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. after waiting 0 ms 2023-07-11 15:33:46,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 
2023-07-11 15:33:46,403 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=48 2023-07-11 15:33:46,403 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=48, state=SUCCESS; CloseRegionProcedure b1236b49cb63808d6f804e5665941941, server=jenkins-hbase9.apache.org,36133,1689089616857 in 199 msec 2023-07-11 15:33:46,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:33:46,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:33:46,409 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1236b49cb63808d6f804e5665941941, UNASSIGN in 223 msec 2023-07-11 15:33:46,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618. 2023-07-11 15:33:46,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for f6a5a51ec794ea32a8ed5c0d2b67a618: 2023-07-11 15:33:46,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8. 2023-07-11 15:33:46,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 417827834f8a6ff7deacc3ace80a18b8: 2023-07-11 15:33:46,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:46,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:46,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e9a0df4772b35ec85dde21e5af054331, disabling compactions & flushes 2023-07-11 15:33:46,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:46,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 2023-07-11 15:33:46,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. after waiting 0 ms 2023-07-11 15:33:46,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 
2023-07-11 15:33:46,416 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=f6a5a51ec794ea32a8ed5c0d2b67a618, regionState=CLOSED 2023-07-11 15:33:46,416 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626416"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626416"}]},"ts":"1689089626416"} 2023-07-11 15:33:46,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:46,418 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=417827834f8a6ff7deacc3ace80a18b8, regionState=CLOSED 2023-07-11 15:33:46,419 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626418"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626418"}]},"ts":"1689089626418"} 2023-07-11 15:33:46,422 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=50 2023-07-11 15:33:46,423 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=50, state=SUCCESS; CloseRegionProcedure f6a5a51ec794ea32a8ed5c0d2b67a618, server=jenkins-hbase9.apache.org,42495,1689089616669 in 217 msec 2023-07-11 15:33:46,429 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=51 2023-07-11 15:33:46,429 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6a5a51ec794ea32a8ed5c0d2b67a618, UNASSIGN in 242 msec 2023-07-11 15:33:46,429 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=51, state=SUCCESS; CloseRegionProcedure 417827834f8a6ff7deacc3ace80a18b8, server=jenkins-hbase9.apache.org,36133,1689089616857 in 220 msec 2023-07-11 15:33:46,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:33:46,435 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=417827834f8a6ff7deacc3ace80a18b8, UNASSIGN in 249 msec 2023-07-11 15:33:46,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331. 
2023-07-11 15:33:46,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e9a0df4772b35ec85dde21e5af054331: 2023-07-11 15:33:46,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:46,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:46,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 61d773bf64cca2abfaf2347b17a50a35, disabling compactions & flushes 2023-07-11 15:33:46,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:46,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:46,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. after waiting 0 ms 2023-07-11 15:33:46,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 2023-07-11 15:33:46,441 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=e9a0df4772b35ec85dde21e5af054331, regionState=CLOSED 2023-07-11 15:33:46,441 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089626441"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626441"}]},"ts":"1689089626441"} 2023-07-11 15:33:46,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:33:46,446 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-11 15:33:46,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35. 
2023-07-11 15:33:46,446 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; CloseRegionProcedure e9a0df4772b35ec85dde21e5af054331, server=jenkins-hbase9.apache.org,42495,1689089616669 in 229 msec 2023-07-11 15:33:46,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 61d773bf64cca2abfaf2347b17a50a35: 2023-07-11 15:33:46,448 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e9a0df4772b35ec85dde21e5af054331, UNASSIGN in 263 msec 2023-07-11 15:33:46,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:46,449 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=61d773bf64cca2abfaf2347b17a50a35, regionState=CLOSED 2023-07-11 15:33:46,449 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626449"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626449"}]},"ts":"1689089626449"} 2023-07-11 15:33:46,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-11 15:33:46,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; CloseRegionProcedure 61d773bf64cca2abfaf2347b17a50a35, server=jenkins-hbase9.apache.org,42495,1689089616669 in 251 msec 2023-07-11 15:33:46,458 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=47 2023-07-11 15:33:46,458 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d773bf64cca2abfaf2347b17a50a35, UNASSIGN in 273 msec 2023-07-11 15:33:46,459 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089626458"}]},"ts":"1689089626458"} 2023-07-11 15:33:46,461 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-11 15:33:46,465 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-11 15:33:46,473 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 313 msec 2023-07-11 15:33:46,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-11 15:33:46,478 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-11 15:33:46,480 INFO [Listener at localhost/45661] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:46,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$6(2260): Client=jenkins//172.31.2.10 truncate Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:46,493 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-11 15:33:46,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-11 15:33:46,498 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-11 15:33:46,515 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:46,515 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:46,515 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:46,515 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:46,515 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:46,521 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/recovered.edits] 2023-07-11 15:33:46,521 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/recovered.edits] 2023-07-11 15:33:46,521 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/recovered.edits] 2023-07-11 15:33:46,522 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/f, FileablePath, 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/recovered.edits] 2023-07-11 15:33:46,522 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/recovered.edits] 2023-07-11 15:33:46,546 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/recovered.edits/7.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331/recovered.edits/7.seqid 2023-07-11 15:33:46,546 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/recovered.edits/7.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618/recovered.edits/7.seqid 2023-07-11 15:33:46,546 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/recovered.edits/7.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35/recovered.edits/7.seqid 2023-07-11 15:33:46,548 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e9a0df4772b35ec85dde21e5af054331 2023-07-11 15:33:46,548 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/recovered.edits/7.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8/recovered.edits/7.seqid 2023-07-11 15:33:46,548 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6a5a51ec794ea32a8ed5c0d2b67a618 2023-07-11 15:33:46,548 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d773bf64cca2abfaf2347b17a50a35 2023-07-11 15:33:46,549 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/recovered.edits/7.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941/recovered.edits/7.seqid 2023-07-11 15:33:46,549 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/417827834f8a6ff7deacc3ace80a18b8 2023-07-11 15:33:46,550 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1236b49cb63808d6f804e5665941941 2023-07-11 15:33:46,550 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-11 15:33:46,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-11 15:33:46,614 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-11 15:33:46,639 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-11 15:33:46,640 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-11 15:33:46,640 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089626640"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:46,640 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089626640"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:46,640 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089626640"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:46,640 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089626640"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:46,641 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089626640"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:46,644 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-11 15:33:46,645 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b1236b49cb63808d6f804e5665941941, NAME => 'Group_testTableMoveTruncateAndDrop,,1689089623752.b1236b49cb63808d6f804e5665941941.', STARTKEY => '', 
ENDKEY => 'aaaaa'}, {ENCODED => 61d773bf64cca2abfaf2347b17a50a35, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689089623752.61d773bf64cca2abfaf2347b17a50a35.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f6a5a51ec794ea32a8ed5c0d2b67a618, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089623752.f6a5a51ec794ea32a8ed5c0d2b67a618.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 417827834f8a6ff7deacc3ace80a18b8, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089623752.417827834f8a6ff7deacc3ace80a18b8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => e9a0df4772b35ec85dde21e5af054331, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689089623752.e9a0df4772b35ec85dde21e5af054331.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-11 15:33:46,645 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-11 15:33:46,645 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689089626645"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:46,656 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-11 15:33:46,669 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:46,669 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:46,669 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:46,669 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:46,669 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:46,670 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b empty. 2023-07-11 15:33:46,670 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524 empty. 2023-07-11 15:33:46,671 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e empty. 
2023-07-11 15:33:46,671 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df empty. 2023-07-11 15:33:46,671 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7 empty. 2023-07-11 15:33:46,671 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:46,673 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:46,673 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:46,673 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:46,673 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:46,674 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-11 15:33:46,704 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:46,705 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc720fab572ab2b0f6ca5eb34f6e1e6e, NAME => 'Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:46,706 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8b088c93ef4c397f12ab4e9602e94524, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:46,706 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b417194915b339c95a97a8b198ff8fa7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:46,758 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:46,758 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b417194915b339c95a97a8b198ff8fa7, disabling compactions & flushes 2023-07-11 15:33:46,758 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 2023-07-11 15:33:46,758 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 2023-07-11 15:33:46,758 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. after waiting 0 ms 2023-07-11 15:33:46,758 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 2023-07-11 15:33:46,758 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 
2023-07-11 15:33:46,758 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b417194915b339c95a97a8b198ff8fa7: 2023-07-11 15:33:46,759 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 54d6adb40b2e6374dffab57082261d1b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:46,761 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:46,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8b088c93ef4c397f12ab4e9602e94524, disabling compactions & flushes 2023-07-11 15:33:46,765 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 2023-07-11 15:33:46,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 2023-07-11 15:33:46,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. after waiting 0 ms 2023-07-11 15:33:46,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 2023-07-11 15:33:46,765 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 
2023-07-11 15:33:46,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8b088c93ef4c397f12ab4e9602e94524: 2023-07-11 15:33:46,766 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8de5a620c5bdab0088653916fd6ce7df, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:46,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:46,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing cc720fab572ab2b0f6ca5eb34f6e1e6e, disabling compactions & flushes 2023-07-11 15:33:46,798 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:46,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:46,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. after waiting 0 ms 2023-07-11 15:33:46,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:46,798 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 
2023-07-11 15:33:46,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for cc720fab572ab2b0f6ca5eb34f6e1e6e: 2023-07-11 15:33:46,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-11 15:33:46,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:46,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 54d6adb40b2e6374dffab57082261d1b, disabling compactions & flushes 2023-07-11 15:33:46,822 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 2023-07-11 15:33:46,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 2023-07-11 15:33:46,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. after waiting 0 ms 2023-07-11 15:33:46,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 2023-07-11 15:33:46,822 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 2023-07-11 15:33:46,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 54d6adb40b2e6374dffab57082261d1b: 2023-07-11 15:33:46,830 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:46,830 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8de5a620c5bdab0088653916fd6ce7df, disabling compactions & flushes 2023-07-11 15:33:46,830 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 2023-07-11 15:33:46,830 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 2023-07-11 15:33:46,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 
after waiting 0 ms 2023-07-11 15:33:46,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 2023-07-11 15:33:46,831 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 2023-07-11 15:33:46,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8de5a620c5bdab0088653916fd6ce7df: 2023-07-11 15:33:46,835 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626835"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626835"}]},"ts":"1689089626835"} 2023-07-11 15:33:46,835 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626835"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626835"}]},"ts":"1689089626835"} 2023-07-11 15:33:46,835 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089626835"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626835"}]},"ts":"1689089626835"} 2023-07-11 15:33:46,835 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089626835"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626835"}]},"ts":"1689089626835"} 2023-07-11 15:33:46,835 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089626835"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089626835"}]},"ts":"1689089626835"} 2023-07-11 15:33:46,841 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
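The regions being created and written to hbase:meta above (single column family 'f', five regions bounded by aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz) are the ones re-created by the truncate procedure with preserveSplits=true. As a point of reference only, a minimal sketch of declaring an equivalent pre-split table through the stock HBase 2.x client API could look like the following; the class and variable names are illustrative and not taken from the test:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class PreSplitTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Single family 'f'; BLOOMFILTER=NONE and VERSIONS=1 mirror the descriptor in the log.
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setBloomFilterType(BloomType.NONE)
                .setMaxVersions(1)
                .build())
            .build();
        // Split keys matching the region boundaries in the log:
        // '' .. aaaaa .. i\xBF\x14i\xBE .. r\x1C\xC7r\x1B .. zzzzz .. ''  (five regions).
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
            new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
            Bytes.toBytes("zzzzz")
        };
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          admin.createTable(desc, splits);
        }
      }
    }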
2023-07-11 15:33:46,850 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089626850"}]},"ts":"1689089626850"} 2023-07-11 15:33:46,853 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-11 15:33:46,858 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:46,858 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:46,858 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:46,858 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:46,859 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc720fab572ab2b0f6ca5eb34f6e1e6e, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b088c93ef4c397f12ab4e9602e94524, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b417194915b339c95a97a8b198ff8fa7, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=54d6adb40b2e6374dffab57082261d1b, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8de5a620c5bdab0088653916fd6ce7df, ASSIGN}] 2023-07-11 15:33:46,862 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=54d6adb40b2e6374dffab57082261d1b, ASSIGN 2023-07-11 15:33:46,862 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b088c93ef4c397f12ab4e9602e94524, ASSIGN 2023-07-11 15:33:46,863 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc720fab572ab2b0f6ca5eb34f6e1e6e, ASSIGN 2023-07-11 15:33:46,863 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8de5a620c5bdab0088653916fd6ce7df, ASSIGN 2023-07-11 15:33:46,863 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b417194915b339c95a97a8b198ff8fa7, ASSIGN 2023-07-11 15:33:46,865 INFO [PEWorker-2] 
assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=54d6adb40b2e6374dffab57082261d1b, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42495,1689089616669; forceNewPlan=false, retain=false 2023-07-11 15:33:46,865 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b088c93ef4c397f12ab4e9602e94524, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42495,1689089616669; forceNewPlan=false, retain=false 2023-07-11 15:33:46,865 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8de5a620c5bdab0088653916fd6ce7df, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,36133,1689089616857; forceNewPlan=false, retain=false 2023-07-11 15:33:46,866 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc720fab572ab2b0f6ca5eb34f6e1e6e, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,36133,1689089616857; forceNewPlan=false, retain=false 2023-07-11 15:33:46,866 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b417194915b339c95a97a8b198ff8fa7, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,36133,1689089616857; forceNewPlan=false, retain=false 2023-07-11 15:33:47,015 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-11 15:33:47,019 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=54d6adb40b2e6374dffab57082261d1b, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:47,019 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=8b088c93ef4c397f12ab4e9602e94524, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:47,020 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=b417194915b339c95a97a8b198ff8fa7, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,020 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627019"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627019"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627019"}]},"ts":"1689089627019"} 2023-07-11 15:33:47,020 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627019"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627019"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627019"}]},"ts":"1689089627019"} 2023-07-11 15:33:47,020 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=cc720fab572ab2b0f6ca5eb34f6e1e6e, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,020 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089627020"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627020"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627020"}]},"ts":"1689089627020"} 2023-07-11 15:33:47,019 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=8de5a620c5bdab0088653916fd6ce7df, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,021 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089627019"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627019"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627019"}]},"ts":"1689089627019"} 2023-07-11 15:33:47,020 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627019"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627019"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627019"}]},"ts":"1689089627019"} 2023-07-11 15:33:47,022 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=61, state=RUNNABLE; OpenRegionProcedure 
b417194915b339c95a97a8b198ff8fa7, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:47,024 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=60, state=RUNNABLE; OpenRegionProcedure 8b088c93ef4c397f12ab4e9602e94524, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:47,026 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=59, state=RUNNABLE; OpenRegionProcedure cc720fab572ab2b0f6ca5eb34f6e1e6e, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:47,030 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=63, state=RUNNABLE; OpenRegionProcedure 8de5a620c5bdab0088653916fd6ce7df, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:47,032 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=62, state=RUNNABLE; OpenRegionProcedure 54d6adb40b2e6374dffab57082261d1b, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:47,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-11 15:33:47,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 2023-07-11 15:33:47,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b417194915b339c95a97a8b198ff8fa7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-11 15:33:47,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 
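The recurring "MasterRpcServices(1230): Checking to see if procedure is done pid=58" records are the master answering the client's completion polling for the truncate procedure; the client-side log line "Operation: TRUNCATE ... procId: 58 completed" that appears once the wait returns is the other end of the same exchange. A hedged sketch of the corresponding client call, assuming the standard HBase 2.x Admin API:

    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class TruncateSketch {
      // Blocking form: returns once the master reports the TruncateTableProcedure done.
      static void truncateBlocking(Admin admin, TableName tn) throws java.io.IOException {
        admin.truncateTable(tn, /* preserveSplits = */ true);
      }

      // Asynchronous form: the returned future is what polls the master
      // ("is procedure pid=... done?") until the procedure completes.
      static void truncateAsync(Admin admin, TableName tn) throws Exception {
        Future<Void> done = admin.truncateTableAsync(tn, true);
        done.get();
      }
    }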
2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 54d6adb40b2e6374dffab57082261d1b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,192 INFO [StoreOpener-b417194915b339c95a97a8b198ff8fa7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,192 INFO [StoreOpener-54d6adb40b2e6374dffab57082261d1b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,195 DEBUG [StoreOpener-54d6adb40b2e6374dffab57082261d1b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b/f 2023-07-11 15:33:47,195 DEBUG [StoreOpener-b417194915b339c95a97a8b198ff8fa7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7/f 2023-07-11 15:33:47,195 DEBUG [StoreOpener-54d6adb40b2e6374dffab57082261d1b-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b/f 2023-07-11 15:33:47,195 DEBUG [StoreOpener-b417194915b339c95a97a8b198ff8fa7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7/f 2023-07-11 15:33:47,196 INFO [StoreOpener-54d6adb40b2e6374dffab57082261d1b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 54d6adb40b2e6374dffab57082261d1b columnFamilyName f 2023-07-11 15:33:47,197 INFO [StoreOpener-54d6adb40b2e6374dffab57082261d1b-1] regionserver.HStore(310): Store=54d6adb40b2e6374dffab57082261d1b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:47,197 INFO [StoreOpener-b417194915b339c95a97a8b198ff8fa7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b417194915b339c95a97a8b198ff8fa7 columnFamilyName f 2023-07-11 15:33:47,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,200 INFO [StoreOpener-b417194915b339c95a97a8b198ff8fa7-1] regionserver.HStore(310): Store=b417194915b339c95a97a8b198ff8fa7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:47,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:47,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 54d6adb40b2e6374dffab57082261d1b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10176670400, jitterRate=-0.05222371220588684}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:47,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 54d6adb40b2e6374dffab57082261d1b: 2023-07-11 15:33:47,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b., pid=68, masterSystemTime=1689089627183 2023-07-11 15:33:47,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:47,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened b417194915b339c95a97a8b198ff8fa7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11755876160, jitterRate=0.0948512852191925}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:47,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for b417194915b339c95a97a8b198ff8fa7: 2023-07-11 15:33:47,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7., pid=64, masterSystemTime=1689089627182 2023-07-11 15:33:47,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 
2023-07-11 15:33:47,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 2023-07-11 15:33:47,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 2023-07-11 15:33:47,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8b088c93ef4c397f12ab4e9602e94524, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-11 15:33:47,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:47,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,221 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=54d6adb40b2e6374dffab57082261d1b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:47,222 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627221"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089627221"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089627221"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089627221"}]},"ts":"1689089627221"} 2023-07-11 15:33:47,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 2023-07-11 15:33:47,223 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 2023-07-11 15:33:47,223 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 
2023-07-11 15:33:47,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8de5a620c5bdab0088653916fd6ce7df, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-11 15:33:47,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:47,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,226 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=b417194915b339c95a97a8b198ff8fa7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,227 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627226"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089627226"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089627226"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089627226"}]},"ts":"1689089627226"} 2023-07-11 15:33:47,242 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=62 2023-07-11 15:33:47,242 INFO [StoreOpener-8b088c93ef4c397f12ab4e9602e94524-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,242 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=62, state=SUCCESS; OpenRegionProcedure 54d6adb40b2e6374dffab57082261d1b, server=jenkins-hbase9.apache.org,42495,1689089616669 in 195 msec 2023-07-11 15:33:47,242 INFO [StoreOpener-8de5a620c5bdab0088653916fd6ce7df-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,245 DEBUG [StoreOpener-8de5a620c5bdab0088653916fd6ce7df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df/f 2023-07-11 15:33:47,245 DEBUG [StoreOpener-8de5a620c5bdab0088653916fd6ce7df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df/f 2023-07-11 15:33:47,246 INFO [StoreOpener-8de5a620c5bdab0088653916fd6ce7df-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8de5a620c5bdab0088653916fd6ce7df columnFamilyName f 2023-07-11 15:33:47,246 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=61 2023-07-11 15:33:47,246 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=54d6adb40b2e6374dffab57082261d1b, ASSIGN in 383 msec 2023-07-11 15:33:47,246 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=61, state=SUCCESS; OpenRegionProcedure b417194915b339c95a97a8b198ff8fa7, server=jenkins-hbase9.apache.org,36133,1689089616857 in 209 msec 2023-07-11 15:33:47,246 INFO [StoreOpener-8de5a620c5bdab0088653916fd6ce7df-1] regionserver.HStore(310): Store=8de5a620c5bdab0088653916fd6ce7df/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:47,248 DEBUG [StoreOpener-8b088c93ef4c397f12ab4e9602e94524-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524/f 2023-07-11 15:33:47,248 DEBUG [StoreOpener-8b088c93ef4c397f12ab4e9602e94524-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524/f 2023-07-11 15:33:47,249 INFO [StoreOpener-8b088c93ef4c397f12ab4e9602e94524-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8b088c93ef4c397f12ab4e9602e94524 columnFamilyName f 2023-07-11 15:33:47,249 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b417194915b339c95a97a8b198ff8fa7, ASSIGN in 388 msec 2023-07-11 15:33:47,249 INFO 
[StoreOpener-8b088c93ef4c397f12ab4e9602e94524-1] regionserver.HStore(310): Store=8b088c93ef4c397f12ab4e9602e94524/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:47,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:47,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:47,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 8b088c93ef4c397f12ab4e9602e94524; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11526698080, jitterRate=0.0735074132680893}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:47,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 8b088c93ef4c397f12ab4e9602e94524: 2023-07-11 15:33:47,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 8de5a620c5bdab0088653916fd6ce7df; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11056091520, jitterRate=0.029678761959075928}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:47,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal 
for 8de5a620c5bdab0088653916fd6ce7df: 2023-07-11 15:33:47,268 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df., pid=67, masterSystemTime=1689089627182 2023-07-11 15:33:47,268 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524., pid=65, masterSystemTime=1689089627183 2023-07-11 15:33:47,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 2023-07-11 15:33:47,274 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 2023-07-11 15:33:47,274 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:47,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc720fab572ab2b0f6ca5eb34f6e1e6e, NAME => 'Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-11 15:33:47,274 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=8de5a620c5bdab0088653916fd6ce7df, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:47,274 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089627274"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089627274"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089627274"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089627274"}]},"ts":"1689089627274"} 2023-07-11 15:33:47,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 
2023-07-11 15:33:47,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 2023-07-11 15:33:47,276 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=8b088c93ef4c397f12ab4e9602e94524, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:47,276 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627276"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089627276"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089627276"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089627276"}]},"ts":"1689089627276"} 2023-07-11 15:33:47,278 INFO [StoreOpener-cc720fab572ab2b0f6ca5eb34f6e1e6e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,281 DEBUG [StoreOpener-cc720fab572ab2b0f6ca5eb34f6e1e6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e/f 2023-07-11 15:33:47,281 DEBUG [StoreOpener-cc720fab572ab2b0f6ca5eb34f6e1e6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e/f 2023-07-11 15:33:47,282 INFO [StoreOpener-cc720fab572ab2b0f6ca5eb34f6e1e6e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc720fab572ab2b0f6ca5eb34f6e1e6e columnFamilyName f 2023-07-11 15:33:47,284 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=60 2023-07-11 15:33:47,284 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; OpenRegionProcedure 8b088c93ef4c397f12ab4e9602e94524, server=jenkins-hbase9.apache.org,42495,1689089616669 in 256 msec 2023-07-11 15:33:47,284 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume 
processing ppid=63 2023-07-11 15:33:47,284 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; OpenRegionProcedure 8de5a620c5bdab0088653916fd6ce7df, server=jenkins-hbase9.apache.org,36133,1689089616857 in 253 msec 2023-07-11 15:33:47,285 INFO [StoreOpener-cc720fab572ab2b0f6ca5eb34f6e1e6e-1] regionserver.HStore(310): Store=cc720fab572ab2b0f6ca5eb34f6e1e6e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:47,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8de5a620c5bdab0088653916fd6ce7df, ASSIGN in 425 msec 2023-07-11 15:33:47,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,289 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b088c93ef4c397f12ab4e9602e94524, ASSIGN in 425 msec 2023-07-11 15:33:47,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:47,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened cc720fab572ab2b0f6ca5eb34f6e1e6e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12053080640, jitterRate=0.12253060936927795}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:47,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for cc720fab572ab2b0f6ca5eb34f6e1e6e: 2023-07-11 15:33:47,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e., pid=66, masterSystemTime=1689089627182 2023-07-11 15:33:47,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:47,302 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 
2023-07-11 15:33:47,304 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=cc720fab572ab2b0f6ca5eb34f6e1e6e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,305 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089627304"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089627304"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089627304"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089627304"}]},"ts":"1689089627304"} 2023-07-11 15:33:47,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=59 2023-07-11 15:33:47,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=59, state=SUCCESS; OpenRegionProcedure cc720fab572ab2b0f6ca5eb34f6e1e6e, server=jenkins-hbase9.apache.org,36133,1689089616857 in 281 msec 2023-07-11 15:33:47,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=58 2023-07-11 15:33:47,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc720fab572ab2b0f6ca5eb34f6e1e6e, ASSIGN in 450 msec 2023-07-11 15:33:47,312 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089627312"}]},"ts":"1689089627312"} 2023-07-11 15:33:47,314 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-11 15:33:47,317 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-11 15:33:47,318 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 830 msec 2023-07-11 15:33:47,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-11 15:33:47,605 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-11 15:33:47,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:47,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:47,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:47,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote 
address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:47,609 INFO [Listener at localhost/45661] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-11 15:33:47,615 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089627615"}]},"ts":"1689089627615"} 2023-07-11 15:33:47,616 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-11 15:33:47,618 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-11 15:33:47,619 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc720fab572ab2b0f6ca5eb34f6e1e6e, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b088c93ef4c397f12ab4e9602e94524, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b417194915b339c95a97a8b198ff8fa7, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=54d6adb40b2e6374dffab57082261d1b, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8de5a620c5bdab0088653916fd6ce7df, UNASSIGN}] 2023-07-11 15:33:47,621 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc720fab572ab2b0f6ca5eb34f6e1e6e, UNASSIGN 2023-07-11 15:33:47,621 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b088c93ef4c397f12ab4e9602e94524, UNASSIGN 2023-07-11 15:33:47,621 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b417194915b339c95a97a8b198ff8fa7, UNASSIGN 2023-07-11 15:33:47,622 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=54d6adb40b2e6374dffab57082261d1b, UNASSIGN 2023-07-11 15:33:47,623 INFO 
[PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8de5a620c5bdab0088653916fd6ce7df, UNASSIGN 2023-07-11 15:33:47,623 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=8b088c93ef4c397f12ab4e9602e94524, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:47,624 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627623"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627623"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627623"}]},"ts":"1689089627623"} 2023-07-11 15:33:47,624 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=cc720fab572ab2b0f6ca5eb34f6e1e6e, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,624 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089627623"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627623"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627623"}]},"ts":"1689089627623"} 2023-07-11 15:33:47,625 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=b417194915b339c95a97a8b198ff8fa7, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,625 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627624"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627624"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627624"}]},"ts":"1689089627624"} 2023-07-11 15:33:47,625 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=54d6adb40b2e6374dffab57082261d1b, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:33:47,625 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627625"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627625"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627625"}]},"ts":"1689089627625"} 2023-07-11 15:33:47,626 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=8de5a620c5bdab0088653916fd6ce7df, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:47,626 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089627626"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089627626"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089627626"}]},"ts":"1689089627626"} 2023-07-11 15:33:47,627 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=71, state=RUNNABLE; CloseRegionProcedure 8b088c93ef4c397f12ab4e9602e94524, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:47,628 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=70, state=RUNNABLE; CloseRegionProcedure cc720fab572ab2b0f6ca5eb34f6e1e6e, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:47,630 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=72, state=RUNNABLE; CloseRegionProcedure b417194915b339c95a97a8b198ff8fa7, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:47,631 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=73, state=RUNNABLE; CloseRegionProcedure 54d6adb40b2e6374dffab57082261d1b, server=jenkins-hbase9.apache.org,42495,1689089616669}] 2023-07-11 15:33:47,632 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=74, state=RUNNABLE; CloseRegionProcedure 8de5a620c5bdab0088653916fd6ce7df, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:47,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-11 15:33:47,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 8b088c93ef4c397f12ab4e9602e94524, disabling compactions & flushes 2023-07-11 15:33:47,781 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 2023-07-11 15:33:47,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 2023-07-11 15:33:47,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. after waiting 0 ms 2023-07-11 15:33:47,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 
2023-07-11 15:33:47,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing cc720fab572ab2b0f6ca5eb34f6e1e6e, disabling compactions & flushes 2023-07-11 15:33:47,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:47,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:47,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. after waiting 0 ms 2023-07-11 15:33:47,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:47,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:47,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524. 2023-07-11 15:33:47,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 8b088c93ef4c397f12ab4e9602e94524: 2023-07-11 15:33:47,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,795 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=8b088c93ef4c397f12ab4e9602e94524, regionState=CLOSED 2023-07-11 15:33:47,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 54d6adb40b2e6374dffab57082261d1b, disabling compactions & flushes 2023-07-11 15:33:47,795 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627795"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089627795"}]},"ts":"1689089627795"} 2023-07-11 15:33:47,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 2023-07-11 15:33:47,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 
2023-07-11 15:33:47,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. after waiting 0 ms 2023-07-11 15:33:47,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 2023-07-11 15:33:47,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:47,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:47,802 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b. 2023-07-11 15:33:47,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 54d6adb40b2e6374dffab57082261d1b: 2023-07-11 15:33:47,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e. 2023-07-11 15:33:47,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for cc720fab572ab2b0f6ca5eb34f6e1e6e: 2023-07-11 15:33:47,805 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=54d6adb40b2e6374dffab57082261d1b, regionState=CLOSED 2023-07-11 15:33:47,806 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627805"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089627805"}]},"ts":"1689089627805"} 2023-07-11 15:33:47,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing b417194915b339c95a97a8b198ff8fa7, disabling compactions & flushes 2023-07-11 15:33:47,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 2023-07-11 15:33:47,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 
2023-07-11 15:33:47,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. after waiting 0 ms 2023-07-11 15:33:47,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 2023-07-11 15:33:47,810 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=cc720fab572ab2b0f6ca5eb34f6e1e6e, regionState=CLOSED 2023-07-11 15:33:47,810 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089627810"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089627810"}]},"ts":"1689089627810"} 2023-07-11 15:33:47,812 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=71 2023-07-11 15:33:47,812 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=71, state=SUCCESS; CloseRegionProcedure 8b088c93ef4c397f12ab4e9602e94524, server=jenkins-hbase9.apache.org,42495,1689089616669 in 178 msec 2023-07-11 15:33:47,817 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b088c93ef4c397f12ab4e9602e94524, UNASSIGN in 193 msec 2023-07-11 15:33:47,817 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=73 2023-07-11 15:33:47,817 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=73, state=SUCCESS; CloseRegionProcedure 54d6adb40b2e6374dffab57082261d1b, server=jenkins-hbase9.apache.org,42495,1689089616669 in 178 msec 2023-07-11 15:33:47,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:47,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7. 
2023-07-11 15:33:47,818 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for b417194915b339c95a97a8b198ff8fa7: 2023-07-11 15:33:47,819 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=70 2023-07-11 15:33:47,819 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=54d6adb40b2e6374dffab57082261d1b, UNASSIGN in 198 msec 2023-07-11 15:33:47,819 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=70, state=SUCCESS; CloseRegionProcedure cc720fab572ab2b0f6ca5eb34f6e1e6e, server=jenkins-hbase9.apache.org,36133,1689089616857 in 186 msec 2023-07-11 15:33:47,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 8de5a620c5bdab0088653916fd6ce7df, disabling compactions & flushes 2023-07-11 15:33:47,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 2023-07-11 15:33:47,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 2023-07-11 15:33:47,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. after waiting 0 ms 2023-07-11 15:33:47,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 
2023-07-11 15:33:47,822 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=b417194915b339c95a97a8b198ff8fa7, regionState=CLOSED 2023-07-11 15:33:47,822 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689089627822"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089627822"}]},"ts":"1689089627822"} 2023-07-11 15:33:47,824 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc720fab572ab2b0f6ca5eb34f6e1e6e, UNASSIGN in 200 msec 2023-07-11 15:33:47,826 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=72 2023-07-11 15:33:47,827 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=72, state=SUCCESS; CloseRegionProcedure b417194915b339c95a97a8b198ff8fa7, server=jenkins-hbase9.apache.org,36133,1689089616857 in 194 msec 2023-07-11 15:33:47,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:47,828 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b417194915b339c95a97a8b198ff8fa7, UNASSIGN in 208 msec 2023-07-11 15:33:47,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df. 
2023-07-11 15:33:47,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 8de5a620c5bdab0088653916fd6ce7df: 2023-07-11 15:33:47,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,830 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=8de5a620c5bdab0088653916fd6ce7df, regionState=CLOSED 2023-07-11 15:33:47,830 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689089627830"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089627830"}]},"ts":"1689089627830"} 2023-07-11 15:33:47,834 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=74 2023-07-11 15:33:47,834 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=74, state=SUCCESS; CloseRegionProcedure 8de5a620c5bdab0088653916fd6ce7df, server=jenkins-hbase9.apache.org,36133,1689089616857 in 200 msec 2023-07-11 15:33:47,835 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=69 2023-07-11 15:33:47,835 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8de5a620c5bdab0088653916fd6ce7df, UNASSIGN in 215 msec 2023-07-11 15:33:47,836 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089627836"}]},"ts":"1689089627836"} 2023-07-11 15:33:47,838 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-11 15:33:47,841 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-11 15:33:47,843 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 232 msec 2023-07-11 15:33:47,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-11 15:33:47,917 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-11 15:33:47,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,936 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from 
rsgroup 'Group_testTableMoveTruncateAndDrop_2018559090' 2023-07-11 15:33:47,938 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:47,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:47,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:47,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:47,953 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,953 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,953 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,956 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e/recovered.edits] 2023-07-11 15:33:47,957 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7/recovered.edits] 2023-07-11 15:33:47,958 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524/recovered.edits] 2023-07-11 15:33:47,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-11 15:33:47,959 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,959 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,964 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df/recovered.edits] 2023-07-11 15:33:47,966 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b/recovered.edits] 2023-07-11 15:33:47,970 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e/recovered.edits/4.seqid 2023-07-11 15:33:47,971 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc720fab572ab2b0f6ca5eb34f6e1e6e 2023-07-11 15:33:47,972 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524/recovered.edits/4.seqid 2023-07-11 15:33:47,973 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7/recovered.edits/4.seqid 2023-07-11 15:33:47,974 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b088c93ef4c397f12ab4e9602e94524 2023-07-11 15:33:47,974 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b417194915b339c95a97a8b198ff8fa7 2023-07-11 15:33:47,975 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b/recovered.edits/4.seqid 2023-07-11 15:33:47,976 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/54d6adb40b2e6374dffab57082261d1b 2023-07-11 15:33:47,976 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df/recovered.edits/4.seqid 2023-07-11 15:33:47,977 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8de5a620c5bdab0088653916fd6ce7df 2023-07-11 15:33:47,977 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-11 15:33:47,983 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,992 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-11 15:33:47,994 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-11 15:33:47,996 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:47,996 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-11 15:33:47,996 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089627996"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:47,997 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089627996"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:47,997 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089627996"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:47,997 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089627996"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:47,997 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089627996"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:47,999 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-11 15:33:48,000 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cc720fab572ab2b0f6ca5eb34f6e1e6e, NAME => 'Group_testTableMoveTruncateAndDrop,,1689089626557.cc720fab572ab2b0f6ca5eb34f6e1e6e.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 8b088c93ef4c397f12ab4e9602e94524, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689089626557.8b088c93ef4c397f12ab4e9602e94524.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => b417194915b339c95a97a8b198ff8fa7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689089626557.b417194915b339c95a97a8b198ff8fa7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 54d6adb40b2e6374dffab57082261d1b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689089626557.54d6adb40b2e6374dffab57082261d1b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8de5a620c5bdab0088653916fd6ce7df, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689089626557.8de5a620c5bdab0088653916fd6ce7df.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-11 15:33:48,000 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-11 15:33:48,000 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689089628000"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:48,002 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-11 15:33:48,004 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 15:33:48,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 81 msec 2023-07-11 15:33:48,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-11 15:33:48,063 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-11 15:33:48,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:48,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:48,068 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42495] ipc.CallRunner(144): callId: 165 service: ClientService methodName: Scan size: 147 connection: 172.31.2.10:50064 deadline: 1689089688068, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=43957 startCode=1689089616370. As of locationSeqNum=6. 2023-07-11 15:33:48,173 DEBUG [hconnection-0x2a6672ab-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:33:48,175 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:41630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:33:48,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:48,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
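[Editorial sketch] The DISABLE (pid=69) and DELETE (pid=80) procedures recorded above are driven by ordinary Admin calls from the test client. A minimal client-side sketch in Java, assuming an already running cluster and a freshly created Connection (the class and method names below are the standard HBase client API, not code copied from the test itself):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableAndDropTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
                // Submits a DisableTableProcedure on the master (pid=69 in the log above)
                // and waits until every region has been unassigned and closed.
                admin.disableTable(table);
                // Submits a DeleteTableProcedure (pid=80): archives the region directories,
                // removes the region rows from hbase:meta and drops the table descriptor.
                admin.deleteTable(table);
            }
        }
    }

Both calls block until the corresponding master procedure finishes, which is why the log shows the repeated "Checking to see if procedure is done pid=69 / pid=80" probes from the RPC handler while the procedures run.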
2023-07-11 15:33:48,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:48,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495] to rsgroup default 2023-07-11 15:33:48,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:48,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:48,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_2018559090, current retry=0 2023-07-11 15:33:48,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669] are moved back to Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:48,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_2018559090 => default 2023-07-11 15:33:48,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:48,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_testTableMoveTruncateAndDrop_2018559090 2023-07-11 15:33:48,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 15:33:48,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:48,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:48,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
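[Editorial sketch] The rsgroup cleanup in this teardown (moving the two region servers back to the default group and removing the temporary group) goes through the coprocessor-backed client visible in the stack trace further below, org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient. A rough sketch of the equivalent calls, assuming the branch-2.4 hbase-rsgroup client is constructed from an open Connection (constructor and signatures may differ on other branches):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // 'conn' is assumed to be an open Connection to the mini cluster.
    void restoreDefaultGroup(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Move the test group's region servers back to 'default'
        // (RSGroupAdminService.MoveServers in the log above).
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase9.apache.org", 36133),
            Address.fromParts("jenkins-hbase9.apache.org", 42495)));
        rsGroupAdmin.moveServers(servers, "default");
        // Drop the now-empty group (RSGroupAdminService.RemoveRSGroup).
        rsGroupAdmin.removeRSGroup("Group_testTableMoveTruncateAndDrop_2018559090");
    }

The later attempt to move jenkins-hbase9.apache.org:44179 into the "master" group fails with the ConstraintException shown below because 44179 is the master's own port rather than a registered region server; the test treats this as expected ("Got this on setup, FYI").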
2023-07-11 15:33:48,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:48,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:48,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:48,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:48,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:48,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:48,246 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:48,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:48,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:48,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:48,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:48,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:48,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090828264, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:48,265 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:33:48,268 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:48,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,270 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:48,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:48,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:48,319 INFO [Listener at localhost/45661] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=508 (was 419) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:43853 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-369846627-172.31.2.10-1689089610610:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp244092294-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49791@0x0bc50a61-SendThread(127.0.0.1:49791) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:3;jenkins-hbase9:45349 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:43853 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1751209828_17 at /127.0.0.1:47524 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_869744000_17 at /127.0.0.1:57810 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1d0a72d0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1903062719_17 at /127.0.0.1:47566 [Receiving block BP-369846627-172.31.2.10-1689089610610:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-369846627-172.31.2.10-1689089610610:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c-prefix:jenkins-hbase9.apache.org,45349,1689089620952.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c-prefix:jenkins-hbase9.apache.org,45349,1689089620952 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp244092294-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1903062719_17 at /127.0.0.1:47498 [Receiving block BP-369846627-172.31.2.10-1689089610610:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1751209828_17 at /127.0.0.1:39412 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49791@0x0bc50a61-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1903062719_17 at /127.0.0.1:57610 [Receiving block BP-369846627-172.31.2.10-1689089610610:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1439235440_17 at /127.0.0.1:47624 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp244092294-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-1903062719_17 at /127.0.0.1:39372 [Receiving block BP-369846627-172.31.2.10-1689089610610:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1903062719_17 at /127.0.0.1:57656 [Receiving block BP-369846627-172.31.2.10-1689089610610:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-369846627-172.31.2.10-1689089610610:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1903062719_17 at /127.0.0.1:39450 [Receiving block BP-369846627-172.31.2.10-1689089610610:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp244092294-632 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp244092294-635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-369846627-172.31.2.10-1689089610610:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49791@0x0bc50a61 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-369846627-172.31.2.10-1689089610610:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp244092294-634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45349 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase9:45349-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp244092294-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:45349Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp244092294-633-acceptor-0@65e45b43-ServerConnector@4f8a6a12{HTTP/1.1, (http/1.1)}{0.0.0.0:35705} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-369846627-172.31.2.10-1689089610610:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=812 (was 684) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=500 (was 509), ProcessCount=174 (was 178), AvailableMemoryMB=6795 (was 7333) 2023-07-11 15:33:48,321 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-11 15:33:48,343 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=508, OpenFileDescriptor=812, MaxFileDescriptor=60000, SystemLoadAverage=500, ProcessCount=174, AvailableMemoryMB=6792 2023-07-11 15:33:48,343 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-11 15:33:48,344 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-11 15:33:48,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:48,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
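The ResourceChecker entries above work by diffing per-test snapshots: thread count, open file descriptors, system load, process count and free memory are recorded in the "before:" line, compared in the "after:" line, and any thread that appeared during the test is dumped as a "Potentially hanging thread" with its stack; the "Thread=508 is superior to 500" warning fires when the absolute thread count crosses the checker's threshold. A minimal sketch of that snapshot-and-diff idea, using only standard JDK thread APIs and not HBase's actual ResourceChecker, might look like this:

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: illustrates the before/after diff pattern seen in the log,
// not the real org.apache.hadoop.hbase.ResourceChecker implementation.
public class ResourceSnapshotSketch {
    private Set<String> threadsBefore;

    public void before() {
        threadsBefore = new HashSet<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            threadsBefore.add(t.getName());
        }
    }

    public void after(int threadWarnThreshold) {
        Map<Thread, StackTraceElement[]> now = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> e : now.entrySet()) {
            if (!threadsBefore.contains(e.getKey().getName())) {
                // A thread that outlived the test: report it the way the log does.
                System.out.println("Potentially hanging thread: " + e.getKey().getName());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    " + frame);
                }
            }
        }
        if (now.size() > threadWarnThreshold) {
            System.out.println("Thread=" + now.size() + " is superior to " + threadWarnThreshold);
        }
    }
}

In this run the count never drops back under the threshold, which is why the same warning repeats before and after each test method.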
2023-07-11 15:33:48,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:48,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:48,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:48,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:48,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:48,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:48,366 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:48,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:48,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:48,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:48,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:48,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:48,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090828381, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:48,382 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:33:48,385 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:48,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,386 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:48,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:48,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:48,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup foo* 2023-07-11 15:33:48,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:48,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.2.10:55202 deadline: 1689090828388, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-11 15:33:48,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup foo@ 2023-07-11 15:33:48,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:48,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.2.10:55202 deadline: 1689090828390, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-11 15:33:48,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup - 2023-07-11 15:33:48,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:48,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.2.10:55202 deadline: 1689090828398, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-11 15:33:48,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup foo_123 2023-07-11 15:33:48,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-11 15:33:48,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:48,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:48,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:48,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
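The three rejected addRSGroup calls above ("foo*", "foo@" and "-") all fail in RSGroupInfoManagerImpl.checkGroupName with "RSGroup name should only contain alphanumeric characters", while "foo_123" is accepted, so the underscore is evidently allowed even though the message only mentions alphanumerics. A rough standalone equivalent of that observed check, written as a sketch rather than the actual HBase code, could be:

import org.apache.hadoop.hbase.constraint.ConstraintException;

// Sketch of the validation behaviour observed in the log: alphanumerics and '_'
// pass, anything else (e.g. "foo*", "foo@", "-") is rejected.
final class GroupNameCheckSketch {
    static void checkGroupName(String groupName) throws ConstraintException {
        if (groupName == null || !groupName.matches("[a-zA-Z0-9_]+")) {
            throw new ConstraintException(
                "RSGroup name should only contain alphanumeric characters");
        }
    }

    public static void main(String[] args) {
        try {
            checkGroupName("foo_123"); // accepted, matching the log
            checkGroupName("foo*");    // rejected, matching the log
        } catch (ConstraintException e) {
            System.out.println(e.getMessage());
        }
    }
}

Because ConstraintException is a DoNotRetryIOException, the client sees the failure immediately instead of retrying the RPC, which is why each bad name produces exactly one callId in the log.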
2023-07-11 15:33:48,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:48,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:48,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:48,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup foo_123 2023-07-11 15:33:48,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 15:33:48,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:48,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:48,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
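Each successful add or remove above is also mirrored into ZooKeeper: RSGroupInfoManagerImpl keeps one znode per group under /hbase/rsgroup (for example /hbase/rsgroup/default, /hbase/rsgroup/master, /hbase/rsgroup/foo_123) and then logs the total it wrote as "Writing ZK GroupInfo count". The sketch below shows only that layout with a plain ZooKeeper client; the quorum string, timeout and payload are placeholders, the protobuf serialization the real manager uses is omitted, and the parent znodes are assumed to exist:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Sketch of the /hbase/rsgroup/<group> znode layout implied by the log.
// Not the RSGroupInfoManagerImpl code; data is an empty placeholder.
public class RSGroupZNodeSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
        String base = "/hbase/rsgroup"; // assumed to exist already
        String[] groups = {"default", "master", "foo_123"};
        for (String group : groups) {
            String path = base + "/" + group;
            byte[] data = new byte[0]; // placeholder for the serialized group info
            if (zk.exists(path, false) == null) {
                zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            } else {
                zk.setData(path, data, -1);
            }
        }
        System.out.println("Writing ZK GroupInfo count: " + groups.length);
        zk.close();
    }
}

The count in the log (6, then 5, then 3) tracks how many such znode writes the manager issued for that update, which is why it moves in step with the add/remove calls.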
2023-07-11 15:33:48,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:48,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:48,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:48,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:48,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:48,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:48,461 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:48,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:48,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:48,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:48,481 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,481 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:48,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:48,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090828484, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:48,485 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:33:48,487 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:48,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,489 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:48,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:48,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:48,512 INFO [Listener at localhost/45661] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=511 (was 508) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=812 (was 812), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=500 (was 500), ProcessCount=174 (was 174), AvailableMemoryMB=6782 (was 6792) 2023-07-11 15:33:48,512 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-11 15:33:48,537 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=511, OpenFileDescriptor=812, MaxFileDescriptor=60000, SystemLoadAverage=500, ProcessCount=174, AvailableMemoryMB=6772 2023-07-11 15:33:48,537 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-11 15:33:48,538 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-11 15:33:48,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:48,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
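The recurring ConstraintException above comes from the shared setup/teardown in TestRSGroupsBase: it tries to move the active master's address (jenkins-hbase9.apache.org:44179) into the "master" rsgroup, but RSGroupAdminServer.moveServers only accepts addresses of online region servers, so the call fails and the harness logs it as "Got this on setup, FYI" and continues. A hedged sketch of that client call is below; it assumes an open Connection and infers the moveServers(Set<Address>, String) shape from the stack-trace frames rather than from the exact branch-2.4 signature:

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Sketch of the teardown step that produces the ConstraintException in the log.
// The Connection is assumed to be created elsewhere; method shapes are inferred.
public class MoveMasterToGroupSketch {
    static void tryMoveMaster(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        Address master = Address.fromString("jenkins-hbase9.apache.org:44179");
        try {
            rsGroupAdmin.moveServers(Collections.singleton(master), "master");
        } catch (ConstraintException e) {
            // Expected when the address is not an online region server:
            // "Server ... is either offline or it does not exist."
            System.out.println("Got this on setup, FYI: " + e.getMessage());
        }
    }
}

Because the exception is swallowed, the only visible effect per test method is one extra callId failure plus the WARN line, and the cleanup then proceeds to verify that only the default and master groups remain.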
2023-07-11 15:33:48,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:48,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:48,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:48,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:48,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:48,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:48,560 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:48,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:48,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:48,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:48,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:48,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-11 15:33:48,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090828585, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
2023-07-11 15:33:48,587 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-11 15:33:48,589 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-11 15:33:48,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-11 15:33:48,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-11 15:33:48,590 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-11 15:33:48,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default
2023-07-11 15:33:48,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-11 15:33:48,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup
2023-07-11 15:33:48,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-11 15:33:48,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default
2023-07-11 15:33:48,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-11 15:33:48,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup bar
2023-07-11
15:33:48,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 15:33:48,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:48,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:48,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:48,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:48,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:42495] to rsgroup bar 2023-07-11 15:33:48,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:48,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 15:33:48,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:48,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:48,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(238): Moving server region db11ce5f2f749a24653755c2ee31ecfe, which do not belong to RSGroup bar 2023-07-11 15:33:48,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, REOPEN/MOVE 2023-07-11 15:33:48,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-11 15:33:48,629 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, REOPEN/MOVE 2023-07-11 15:33:48,630 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:48,630 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089628630"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089628630"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089628630"}]},"ts":"1689089628630"} 2023-07-11 15:33:48,632 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:48,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:48,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing db11ce5f2f749a24653755c2ee31ecfe, disabling compactions & flushes 2023-07-11 15:33:48,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:48,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:48,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. after waiting 0 ms 2023-07-11 15:33:48,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:48,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-11 15:33:48,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:48,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for db11ce5f2f749a24653755c2ee31ecfe: 2023-07-11 15:33:48,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding db11ce5f2f749a24653755c2ee31ecfe move to jenkins-hbase9.apache.org,45349,1689089620952 record at close sequenceid=10 2023-07-11 15:33:48,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:48,818 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=CLOSED 2023-07-11 15:33:48,818 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089628818"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089628818"}]},"ts":"1689089628818"} 2023-07-11 15:33:48,824 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-11 15:33:48,824 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,43957,1689089616370 in 188 msec 2023-07-11 15:33:48,825 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:48,976 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:48,976 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089628976"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089628976"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089628976"}]},"ts":"1689089628976"} 2023-07-11 15:33:48,979 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:49,136 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:49,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => db11ce5f2f749a24653755c2ee31ecfe, NAME => 'hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:49,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:49,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:49,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:49,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:49,139 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:49,141 DEBUG [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info 2023-07-11 15:33:49,141 DEBUG [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info 2023-07-11 15:33:49,142 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region db11ce5f2f749a24653755c2ee31ecfe columnFamilyName info 2023-07-11 15:33:49,154 DEBUG [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] regionserver.HStore(539): loaded hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/info/8f2aba012e7641a099d18b10060f2fd8 2023-07-11 15:33:49,154 INFO [StoreOpener-db11ce5f2f749a24653755c2ee31ecfe-1] regionserver.HStore(310): Store=db11ce5f2f749a24653755c2ee31ecfe/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:49,155 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:49,157 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:49,161 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:33:49,162 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened db11ce5f2f749a24653755c2ee31ecfe; next sequenceid=13; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10044119360, jitterRate=-0.06456848978996277}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:49,162 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for db11ce5f2f749a24653755c2ee31ecfe: 2023-07-11 15:33:49,163 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe., pid=83, masterSystemTime=1689089629131 2023-07-11 15:33:49,165 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:33:49,165 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:33:49,166 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=db11ce5f2f749a24653755c2ee31ecfe, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:49,166 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089629165"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089629165"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089629165"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089629165"}]},"ts":"1689089629165"} 2023-07-11 15:33:49,169 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-11 15:33:49,170 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure db11ce5f2f749a24653755c2ee31ecfe, server=jenkins-hbase9.apache.org,45349,1689089620952 in 189 msec 2023-07-11 15:33:49,171 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=db11ce5f2f749a24653755c2ee31ecfe, REOPEN/MOVE in 543 msec 2023-07-11 15:33:49,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-11 15:33:49,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669, jenkins-hbase9.apache.org,43957,1689089616370] are moved back to default 2023-07-11 15:33:49,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-11 15:33:49,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:49,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:49,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:49,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=bar 2023-07-11 15:33:49,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:49,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-11 15:33:49,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-11 15:33:49,649 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:33:49,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-11 15:33:49,651 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:49,652 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 15:33:49,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-11 15:33:49,653 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:49,653 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:49,656 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:33:49,658 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:49,659 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d empty. 
2023-07-11 15:33:49,660 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:49,660 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-11 15:33:49,681 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:49,682 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 37b4399585607ed8ee94477d3f53cf7d, NAME => 'Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:49,706 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:49,706 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 37b4399585607ed8ee94477d3f53cf7d, disabling compactions & flushes 2023-07-11 15:33:49,706 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:49,706 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:49,706 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. after waiting 0 ms 2023-07-11 15:33:49,706 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:49,706 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 
2023-07-11 15:33:49,706 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 37b4399585607ed8ee94477d3f53cf7d: 2023-07-11 15:33:49,711 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:33:49,712 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089629712"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089629712"}]},"ts":"1689089629712"} 2023-07-11 15:33:49,713 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:33:49,714 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:33:49,714 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089629714"}]},"ts":"1689089629714"} 2023-07-11 15:33:49,716 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-11 15:33:49,726 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, ASSIGN}] 2023-07-11 15:33:49,728 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, ASSIGN 2023-07-11 15:33:49,729 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:49,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-11 15:33:49,881 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:49,881 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089629881"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089629881"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089629881"}]},"ts":"1689089629881"} 2023-07-11 15:33:49,883 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 
15:33:49,952 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 15:33:49,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-11 15:33:50,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:50,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 37b4399585607ed8ee94477d3f53cf7d, NAME => 'Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:50,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:50,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,046 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,055 DEBUG [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/f 2023-07-11 15:33:50,055 DEBUG [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/f 2023-07-11 15:33:50,055 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 37b4399585607ed8ee94477d3f53cf7d columnFamilyName f 2023-07-11 15:33:50,057 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] regionserver.HStore(310): 
Store=37b4399585607ed8ee94477d3f53cf7d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:50,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:50,078 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 37b4399585607ed8ee94477d3f53cf7d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12028423680, jitterRate=0.12023425102233887}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:50,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 37b4399585607ed8ee94477d3f53cf7d: 2023-07-11 15:33:50,082 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d., pid=86, masterSystemTime=1689089630036 2023-07-11 15:33:50,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:50,095 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 
2023-07-11 15:33:50,098 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:50,098 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089630098"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089630098"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089630098"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089630098"}]},"ts":"1689089630098"} 2023-07-11 15:33:50,105 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-11 15:33:50,106 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,45349,1689089620952 in 217 msec 2023-07-11 15:33:50,108 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-11 15:33:50,109 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, ASSIGN in 379 msec 2023-07-11 15:33:50,109 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:33:50,109 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089630109"}]},"ts":"1689089630109"} 2023-07-11 15:33:50,111 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-11 15:33:50,114 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:33:50,116 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 474 msec 2023-07-11 15:33:50,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-11 15:33:50,258 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-11 15:33:50,259 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-11 15:33:50,259 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:50,271 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
2023-07-11 15:33:50,271 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:50,271 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-11 15:33:50,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-11 15:33:50,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:50,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 15:33:50,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:50,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:50,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-11 15:33:50,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region 37b4399585607ed8ee94477d3f53cf7d to RSGroup bar 2023-07-11 15:33:50,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:50,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:50,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:50,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:50,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-11 15:33:50,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:50,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, REOPEN/MOVE 2023-07-11 15:33:50,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-11 15:33:50,300 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, REOPEN/MOVE 2023-07-11 15:33:50,301 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:50,301 DEBUG [PEWorker-4] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089630301"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089630301"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089630301"}]},"ts":"1689089630301"} 2023-07-11 15:33:50,317 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:50,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,480 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 37b4399585607ed8ee94477d3f53cf7d, disabling compactions & flushes 2023-07-11 15:33:50,480 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:50,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:50,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. after waiting 0 ms 2023-07-11 15:33:50,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:50,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:50,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 
2023-07-11 15:33:50,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 37b4399585607ed8ee94477d3f53cf7d: 2023-07-11 15:33:50,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 37b4399585607ed8ee94477d3f53cf7d move to jenkins-hbase9.apache.org,36133,1689089616857 record at close sequenceid=2 2023-07-11 15:33:50,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,494 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=CLOSED 2023-07-11 15:33:50,494 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089630494"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089630494"}]},"ts":"1689089630494"} 2023-07-11 15:33:50,498 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-11 15:33:50,499 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,45349,1689089620952 in 192 msec 2023-07-11 15:33:50,500 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,36133,1689089616857; forceNewPlan=false, retain=false 2023-07-11 15:33:50,650 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-11 15:33:50,651 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:50,651 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089630651"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089630651"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089630651"}]},"ts":"1689089630651"} 2023-07-11 15:33:50,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:50,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 
2023-07-11 15:33:50,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 37b4399585607ed8ee94477d3f53cf7d, NAME => 'Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:50,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:50,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,813 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,815 DEBUG [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/f 2023-07-11 15:33:50,815 DEBUG [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/f 2023-07-11 15:33:50,816 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 37b4399585607ed8ee94477d3f53cf7d columnFamilyName f 2023-07-11 15:33:50,817 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] regionserver.HStore(310): Store=37b4399585607ed8ee94477d3f53cf7d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:50,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,819 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:50,823 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 37b4399585607ed8ee94477d3f53cf7d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10276733280, jitterRate=-0.042904630303382874}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:50,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 37b4399585607ed8ee94477d3f53cf7d: 2023-07-11 15:33:50,824 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d., pid=89, masterSystemTime=1689089630805 2023-07-11 15:33:50,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:50,827 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:50,827 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:50,828 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089630827"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089630827"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089630827"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089630827"}]},"ts":"1689089630827"} 2023-07-11 15:33:50,830 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-11 15:33:50,831 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,36133,1689089616857 in 176 msec 2023-07-11 15:33:50,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, REOPEN/MOVE in 533 msec 2023-07-11 15:33:51,140 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-11 15:33:51,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-11 15:33:51,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
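The RSGroupAdminService.MoveTables request that drove the REOPEN/MOVE above (pid=87) is what a client issues to pin a table to an rsgroup. A minimal sketch, assuming the branch-2.4 hbase-rsgroup client class org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient (the same class that shows up in the stack traces later in this log); the connection boilerplate is illustrative only:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToBar {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Ask the master to place the table in rsgroup "bar"; the master then
          // reopens each of its regions on a server that belongs to that group,
          // which is the CLOSED -> OPENING -> OPEN sequence logged above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
        }
      }
    }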
2023-07-11 15:33:51,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:51,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:51,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:51,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=bar 2023-07-11 15:33:51,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:51,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup bar 2023-07-11 15:33:51,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:51,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.2.10:55202 deadline: 1689090831312, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-11 15:33:51,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:42495] to rsgroup default 2023-07-11 15:33:51,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:51,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.2.10:55202 deadline: 1689090831316, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-11 15:33:51,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-11 15:33:51,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:51,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 15:33:51,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:51,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:51,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-11 15:33:51,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region 37b4399585607ed8ee94477d3f53cf7d to RSGroup default 2023-07-11 15:33:51,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, REOPEN/MOVE 2023-07-11 15:33:51,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-11 15:33:51,328 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, REOPEN/MOVE 2023-07-11 15:33:51,330 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:51,330 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089631330"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089631330"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089631330"}]},"ts":"1689089631330"} 2023-07-11 15:33:51,332 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:51,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 37b4399585607ed8ee94477d3f53cf7d, disabling compactions & flushes 2023-07-11 15:33:51,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:51,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:51,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. after waiting 0 ms 2023-07-11 15:33:51,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:51,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:33:51,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 
2023-07-11 15:33:51,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 37b4399585607ed8ee94477d3f53cf7d: 2023-07-11 15:33:51,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 37b4399585607ed8ee94477d3f53cf7d move to jenkins-hbase9.apache.org,45349,1689089620952 record at close sequenceid=5 2023-07-11 15:33:51,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,496 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=CLOSED 2023-07-11 15:33:51,496 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089631496"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089631496"}]},"ts":"1689089631496"} 2023-07-11 15:33:51,499 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-11 15:33:51,499 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,36133,1689089616857 in 165 msec 2023-07-11 15:33:51,500 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:51,650 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:51,651 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089631650"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089631650"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089631650"}]},"ts":"1689089631650"} 2023-07-11 15:33:51,653 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:51,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 
2023-07-11 15:33:51,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 37b4399585607ed8ee94477d3f53cf7d, NAME => 'Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:51,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:51,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,817 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,818 DEBUG [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/f 2023-07-11 15:33:51,818 DEBUG [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/f 2023-07-11 15:33:51,819 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 37b4399585607ed8ee94477d3f53cf7d columnFamilyName f 2023-07-11 15:33:51,820 INFO [StoreOpener-37b4399585607ed8ee94477d3f53cf7d-1] regionserver.HStore(310): Store=37b4399585607ed8ee94477d3f53cf7d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:51,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,822 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:51,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 37b4399585607ed8ee94477d3f53cf7d; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9983396960, jitterRate=-0.07022370398044586}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:51,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 37b4399585607ed8ee94477d3f53cf7d: 2023-07-11 15:33:51,827 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d., pid=92, masterSystemTime=1689089631805 2023-07-11 15:33:51,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:51,829 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:51,829 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:51,829 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089631829"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089631829"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089631829"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089631829"}]},"ts":"1689089631829"} 2023-07-11 15:33:51,832 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-11 15:33:51,832 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,45349,1689089620952 in 177 msec 2023-07-11 15:33:51,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, REOPEN/MOVE in 506 msec 2023-07-11 15:33:52,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-11 15:33:52,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
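The close/open cycle recorded for pid=87 and pid=90 (hbase:meta updated to CLOSING, a CloseRegionProcedure on the old server, then OPENING and an OpenRegionProcedure on the new one) is the generic REOPEN/MOVE TransitRegionStateProcedure; the same transition can be requested for a single region through the standard Admin API. A sketch, with the encoded region name and destination server copied from this run purely as placeholders:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          byte[] encodedRegionName = Bytes.toBytes("37b4399585607ed8ee94477d3f53cf7d");
          ServerName dest = ServerName.valueOf("jenkins-hbase9.apache.org", 45349, 1689089620952L);
          // Asks the master to run a REOPEN/MOVE TransitRegionStateProcedure:
          // close the region on its current server, update hbase:meta, open it on 'dest'.
          admin.move(encodedRegionName, dest);
        }
      }
    }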
2023-07-11 15:33:52,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:52,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:52,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:52,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup bar 2023-07-11 15:33:52,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:52,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.2.10:55202 deadline: 1689090832335, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
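Taken together, the ConstraintExceptions in this test fix the teardown order: a group that still owns tables can neither be removed nor emptied of its servers, and a group that still has servers cannot be removed either, so the tables have to move out first, then the servers, and only then the group itself. A sketch of that sequence, again assuming the branch-2.4 RSGroupAdminClient; the server addresses are the three from this run:

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RemoveGroupInOrder {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // 1. Tables first: removeRSGroup("bar") fails while the group still owns a table,
          //    and moving the servers out first would strand that table without hosts.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
          // 2. Then the servers.
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase9.apache.org", 36133));
          servers.add(Address.fromParts("jenkins-hbase9.apache.org", 42495));
          servers.add(Address.fromParts("jenkins-hbase9.apache.org", 43957));
          rsGroupAdmin.moveServers(servers, "default");
          // 3. Only now is the group empty and removable.
          rsGroupAdmin.removeRSGroup("bar");
        }
      }
    }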
2023-07-11 15:33:52,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:42495] to rsgroup default 2023-07-11 15:33:52,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:52,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 15:33:52,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:52,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:52,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-11 15:33:52,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669, jenkins-hbase9.apache.org,43957,1689089616370] are moved back to bar 2023-07-11 15:33:52,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-11 15:33:52,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:52,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:52,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:52,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup bar 2023-07-11 15:33:52,356 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43957] ipc.CallRunner(144): callId: 220 service: ClientService methodName: Scan size: 147 connection: 172.31.2.10:41630 deadline: 1689089692355, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase9.apache.org port=45349 startCode=1689089620952. As of locationSeqNum=10. 
2023-07-11 15:33:52,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:52,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:52,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 15:33:52,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:52,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:52,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:52,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:52,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:52,476 INFO [Listener at localhost/45661] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-11 15:33:52,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testFailRemoveGroup 2023-07-11 15:33:52,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-11 15:33:52,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-11 15:33:52,480 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089632480"}]},"ts":"1689089632480"} 2023-07-11 15:33:52,482 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-11 15:33:52,485 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-11 15:33:52,486 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, UNASSIGN}] 2023-07-11 15:33:52,487 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, UNASSIGN 2023-07-11 15:33:52,488 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=CLOSING, 
regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:52,488 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089632488"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089632488"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089632488"}]},"ts":"1689089632488"} 2023-07-11 15:33:52,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:52,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-11 15:33:52,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:52,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 37b4399585607ed8ee94477d3f53cf7d, disabling compactions & flushes 2023-07-11 15:33:52,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:52,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:52,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. after waiting 0 ms 2023-07-11 15:33:52,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 2023-07-11 15:33:52,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-11 15:33:52,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d. 
2023-07-11 15:33:52,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 37b4399585607ed8ee94477d3f53cf7d: 2023-07-11 15:33:52,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:52,653 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=37b4399585607ed8ee94477d3f53cf7d, regionState=CLOSED 2023-07-11 15:33:52,653 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689089632653"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089632653"}]},"ts":"1689089632653"} 2023-07-11 15:33:52,658 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-11 15:33:52,658 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure 37b4399585607ed8ee94477d3f53cf7d, server=jenkins-hbase9.apache.org,45349,1689089620952 in 166 msec 2023-07-11 15:33:52,661 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-11 15:33:52,661 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=37b4399585607ed8ee94477d3f53cf7d, UNASSIGN in 172 msec 2023-07-11 15:33:52,662 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089632661"}]},"ts":"1689089632661"} 2023-07-11 15:33:52,663 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-11 15:33:52,669 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-11 15:33:52,671 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 194 msec 2023-07-11 15:33:52,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-11 15:33:52,782 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-11 15:33:52,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testFailRemoveGroup 2023-07-11 15:33:52,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 15:33:52,786 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 15:33:52,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-11 15:33:52,787 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting 
regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 15:33:52,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:52,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:52,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:52,792 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:52,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-11 15:33:52,794 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/recovered.edits] 2023-07-11 15:33:52,799 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/recovered.edits/10.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d/recovered.edits/10.seqid 2023-07-11 15:33:52,800 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testFailRemoveGroup/37b4399585607ed8ee94477d3f53cf7d 2023-07-11 15:33:52,800 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-11 15:33:52,802 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 15:33:52,806 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-11 15:33:52,808 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-11 15:33:52,809 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 15:33:52,809 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-11 15:33:52,809 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089632809"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:52,811 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 15:33:52,811 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 37b4399585607ed8ee94477d3f53cf7d, NAME => 'Group_testFailRemoveGroup,,1689089629640.37b4399585607ed8ee94477d3f53cf7d.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 15:33:52,811 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-11 15:33:52,811 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689089632811"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:52,813 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-11 15:33:52,815 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 15:33:52,817 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 32 msec 2023-07-11 15:33:52,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-11 15:33:52,894 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-11 15:33:52,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:52,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:52,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:52,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
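The DisableTableProcedure (pid=93) and DeleteTableProcedure (pid=96) above are the server side of the usual two-step teardown; a table must be disabled before it can be deleted. A minimal sketch with the standard Admin API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTestTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
          admin.disableTable(tn); // unassigns the regions and marks the table DISABLED in hbase:meta
          admin.deleteTable(tn);  // archives the region directories and removes the meta rows and descriptor
        }
      }
    }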
2023-07-11 15:33:52,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:52,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:52,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:52,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:52,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:52,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:52,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:52,924 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:52,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:52,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:52,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:52,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:52,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:52,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:52,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:52,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:52,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:52,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090832939, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:52,940 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:33:52,942 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:52,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:52,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:52,943 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:52,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:52,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:52,967 INFO [Listener at localhost/45661] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=513 (was 511) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1903062719_17 at /127.0.0.1:44162 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-15 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1751209828_17 at /127.0.0.1:39412 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1952263126_17 at /127.0.0.1:47624 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-14 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1952263126_17 at /127.0.0.1:38086 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2ad2b5e2-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=811 (was 812), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=500 (was 500), ProcessCount=177 (was 174) - ProcessCount LEAK? -, AvailableMemoryMB=6529 (was 6772) 2023-07-11 15:33:52,968 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-11 15:33:52,988 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=513, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=500, ProcessCount=177, AvailableMemoryMB=6518 2023-07-11 15:33:52,988 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-11 15:33:52,988 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-11 15:33:52,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:52,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:52,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:52,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 15:33:52,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:52,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:52,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:53,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:53,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:53,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:53,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:53,018 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:53,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:53,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:53,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:53,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:53,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:53,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:53,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:53,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:53,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:53,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090833054, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:53,055 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:33:53,061 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:53,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:53,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:53,063 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:53,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:53,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:53,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:53,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:53,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_testMultiTableMove_1934420880 2023-07-11 15:33:53,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1934420880 2023-07-11 15:33:53,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:53,072 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:53,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:53,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:53,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:53,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:53,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133] to rsgroup Group_testMultiTableMove_1934420880 2023-07-11 15:33:53,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1934420880 2023-07-11 15:33:53,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:53,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:53,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:53,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 15:33:53,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857] are moved back to default 2023-07-11 15:33:53,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1934420880 2023-07-11 15:33:53,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:53,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:53,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:53,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1934420880 2023-07-11 15:33:53,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:53,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:53,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 15:33:53,101 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:33:53,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-11 15:33:53,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-11 15:33:53,106 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1934420880 2023-07-11 15:33:53,106 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:53,107 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:53,107 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:53,112 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:33:53,115 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,115 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee empty. 
2023-07-11 15:33:53,116 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,116 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-11 15:33:53,139 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:53,141 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6836cbfa8c240cce30ce00220f8fbeee, NAME => 'GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:53,160 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:53,160 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 6836cbfa8c240cce30ce00220f8fbeee, disabling compactions & flushes 2023-07-11 15:33:53,160 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:53,160 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:53,160 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. after waiting 0 ms 2023-07-11 15:33:53,160 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:53,160 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 
2023-07-11 15:33:53,160 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 6836cbfa8c240cce30ce00220f8fbeee: 2023-07-11 15:33:53,167 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:33:53,168 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089633168"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089633168"}]},"ts":"1689089633168"} 2023-07-11 15:33:53,170 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:33:53,171 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:33:53,171 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089633171"}]},"ts":"1689089633171"} 2023-07-11 15:33:53,173 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-11 15:33:53,177 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:53,177 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:53,177 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:53,177 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:53,177 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:53,178 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, ASSIGN}] 2023-07-11 15:33:53,180 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, ASSIGN 2023-07-11 15:33:53,188 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:53,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-11 15:33:53,316 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-11 15:33:53,338 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 15:33:53,340 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=6836cbfa8c240cce30ce00220f8fbeee, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:53,340 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089633340"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089633340"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089633340"}]},"ts":"1689089633340"} 2023-07-11 15:33:53,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 6836cbfa8c240cce30ce00220f8fbeee, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:53,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-11 15:33:53,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:53,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6836cbfa8c240cce30ce00220f8fbeee, NAME => 'GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:53,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:53,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,506 INFO [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,508 DEBUG [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/f 2023-07-11 15:33:53,508 DEBUG [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/f 2023-07-11 15:33:53,508 INFO [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6836cbfa8c240cce30ce00220f8fbeee columnFamilyName f 2023-07-11 15:33:53,509 INFO [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] regionserver.HStore(310): Store=6836cbfa8c240cce30ce00220f8fbeee/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:53,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:53,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:53,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 6836cbfa8c240cce30ce00220f8fbeee; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11905200480, jitterRate=0.10875819623470306}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:53,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 6836cbfa8c240cce30ce00220f8fbeee: 2023-07-11 15:33:53,520 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee., pid=99, masterSystemTime=1689089633496 2023-07-11 15:33:53,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:53,522 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 
2023-07-11 15:33:53,523 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=6836cbfa8c240cce30ce00220f8fbeee, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:53,523 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089633523"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089633523"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089633523"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089633523"}]},"ts":"1689089633523"} 2023-07-11 15:33:53,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-11 15:33:53,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 6836cbfa8c240cce30ce00220f8fbeee, server=jenkins-hbase9.apache.org,45349,1689089620952 in 182 msec 2023-07-11 15:33:53,528 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-11 15:33:53,528 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, ASSIGN in 348 msec 2023-07-11 15:33:53,529 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:33:53,529 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089633529"}]},"ts":"1689089633529"} 2023-07-11 15:33:53,530 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-11 15:33:53,534 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:33:53,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 437 msec 2023-07-11 15:33:53,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-11 15:33:53,707 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-11 15:33:53,707 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-11 15:33:53,707 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:53,712 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-11 15:33:53,713 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:53,713 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-11 15:33:53,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:53,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 15:33:53,719 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:33:53,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-11 15:33:53,723 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1934420880 2023-07-11 15:33:53,723 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:53,724 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:53,724 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:53,726 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:33:53,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-11 15:33:53,728 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:53,729 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 empty. 
2023-07-11 15:33:53,730 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:53,730 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-11 15:33:53,772 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:53,773 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3745a79e3f352ef0234891fb8fbe49e5, NAME => 'GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:53,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:53,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 3745a79e3f352ef0234891fb8fbe49e5, disabling compactions & flushes 2023-07-11 15:33:53,817 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:53,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:53,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. after waiting 0 ms 2023-07-11 15:33:53,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:53,817 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 
2023-07-11 15:33:53,817 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 3745a79e3f352ef0234891fb8fbe49e5: 2023-07-11 15:33:53,825 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:33:53,826 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089633826"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089633826"}]},"ts":"1689089633826"} 2023-07-11 15:33:53,828 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:33:53,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-11 15:33:53,829 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:33:53,830 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089633829"}]},"ts":"1689089633829"} 2023-07-11 15:33:53,831 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-11 15:33:53,836 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:53,836 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:53,836 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:53,836 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:53,836 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:53,836 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, ASSIGN}] 2023-07-11 15:33:53,839 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, ASSIGN 2023-07-11 15:33:53,840 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:33:53,990 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 15:33:53,992 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=3745a79e3f352ef0234891fb8fbe49e5, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:53,992 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089633992"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089633992"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089633992"}]},"ts":"1689089633992"} 2023-07-11 15:33:53,994 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 3745a79e3f352ef0234891fb8fbe49e5, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:54,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-11 15:33:54,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:54,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3745a79e3f352ef0234891fb8fbe49e5, NAME => 'GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:54,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:54,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,153 INFO [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,155 DEBUG [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/f 2023-07-11 15:33:54,155 DEBUG [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/f 2023-07-11 15:33:54,156 INFO [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3745a79e3f352ef0234891fb8fbe49e5 columnFamilyName f 2023-07-11 15:33:54,156 INFO [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] regionserver.HStore(310): Store=3745a79e3f352ef0234891fb8fbe49e5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:54,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:54,163 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 3745a79e3f352ef0234891fb8fbe49e5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9735631840, jitterRate=-0.09329862892627716}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:54,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 3745a79e3f352ef0234891fb8fbe49e5: 2023-07-11 15:33:54,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5., pid=102, masterSystemTime=1689089634147 2023-07-11 15:33:54,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:54,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 
2023-07-11 15:33:54,166 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=3745a79e3f352ef0234891fb8fbe49e5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:54,166 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089634166"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089634166"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089634166"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089634166"}]},"ts":"1689089634166"} 2023-07-11 15:33:54,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-11 15:33:54,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 3745a79e3f352ef0234891fb8fbe49e5, server=jenkins-hbase9.apache.org,43957,1689089616370 in 173 msec 2023-07-11 15:33:54,170 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-11 15:33:54,171 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, ASSIGN in 333 msec 2023-07-11 15:33:54,171 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:33:54,171 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089634171"}]},"ts":"1689089634171"} 2023-07-11 15:33:54,173 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-11 15:33:54,176 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:33:54,179 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 461 msec 2023-07-11 15:33:54,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-11 15:33:54,332 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-11 15:33:54,332 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-11 15:33:54,332 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:54,338 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
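The "Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms" and hbase.Waiter entries around this point come from the test harness (hbase.HBaseTestingUtility, as the logger name shows) polling the mini-cluster after the create returns. The wrapper method and variable names below are assumptions; the underlying call is roughly:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      // Polls hbase:meta and then the AssignmentManager until every region of the table is
      // open, producing the "assigned to meta. Checking AM states." / "All regions ... assigned."
      // entries seen above.
      static void waitForTable(HBaseTestingUtility util, String table) throws IOException {
        util.waitUntilAllRegionsAssigned(TableName.valueOf(table), 60000);
      }
    }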
2023-07-11 15:33:54,338 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:54,338 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-11 15:33:54,338 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:54,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-11 15:33:54,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:33:54,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-11 15:33:54,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:33:54,641 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1934420880 2023-07-11 15:33:54,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1934420880 2023-07-11 15:33:54,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1934420880 2023-07-11 15:33:54,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:54,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:54,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:54,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1934420880 2023-07-11 15:33:54,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region 3745a79e3f352ef0234891fb8fbe49e5 to RSGroup Group_testMultiTableMove_1934420880 2023-07-11 15:33:54,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, REOPEN/MOVE 2023-07-11 15:33:54,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1934420880 2023-07-11 15:33:54,671 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region 6836cbfa8c240cce30ce00220f8fbeee to RSGroup Group_testMultiTableMove_1934420880 2023-07-11 15:33:54,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, REOPEN/MOVE 2023-07-11 15:33:54,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1934420880, current retry=0 2023-07-11 15:33:54,675 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, REOPEN/MOVE 2023-07-11 15:33:54,676 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, REOPEN/MOVE 2023-07-11 15:33:54,679 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=3745a79e3f352ef0234891fb8fbe49e5, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:54,679 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=6836cbfa8c240cce30ce00220f8fbeee, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:54,679 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089634679"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089634679"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089634679"}]},"ts":"1689089634679"} 2023-07-11 15:33:54,679 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089634679"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089634679"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089634679"}]},"ts":"1689089634679"} 2023-07-11 15:33:54,681 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure 3745a79e3f352ef0234891fb8fbe49e5, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:54,682 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 6836cbfa8c240cce30ce00220f8fbeee, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:54,834 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 3745a79e3f352ef0234891fb8fbe49e5, disabling compactions & flushes 2023-07-11 15:33:54,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:54,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:54,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. after waiting 0 ms 2023-07-11 15:33:54,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:54,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:54,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 6836cbfa8c240cce30ce00220f8fbeee, disabling compactions & flushes 2023-07-11 15:33:54,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:54,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:54,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. after waiting 0 ms 2023-07-11 15:33:54,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:54,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:54,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 
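The region closes above are the effect of the MoveTables request logged at 15:33:54,647 ("move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1934420880"): for each region the master runs a REOPEN/MOVE TransitRegionStateProcedure that closes it on its current server and reopens it on a server of the target group. A sketch of the client side, assuming the RSGroupAdminClient shipped in the hbase-rsgroup module on branch-2 (connection handling and class name are illustrative):

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws IOException {
        Set<TableName> tables = new HashSet<>(Arrays.asList(
            TableName.valueOf("GrouptestMultiTableMoveA"),
            TableName.valueOf("GrouptestMultiTableMoveB")));
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          // Issues RSGroupAdminService.MoveTables; the master moves every region of both
          // tables onto servers of the named group, as traced by pids 103-108 in this log.
          new RSGroupAdminClient(conn).moveTables(tables, "Group_testMultiTableMove_1934420880");
        }
      }
    }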
2023-07-11 15:33:54,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 3745a79e3f352ef0234891fb8fbe49e5: 2023-07-11 15:33:54,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 3745a79e3f352ef0234891fb8fbe49e5 move to jenkins-hbase9.apache.org,36133,1689089616857 record at close sequenceid=2 2023-07-11 15:33:54,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:54,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:54,847 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=3745a79e3f352ef0234891fb8fbe49e5, regionState=CLOSED 2023-07-11 15:33:54,847 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089634847"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089634847"}]},"ts":"1689089634847"} 2023-07-11 15:33:54,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:54,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 6836cbfa8c240cce30ce00220f8fbeee: 2023-07-11 15:33:54,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 6836cbfa8c240cce30ce00220f8fbeee move to jenkins-hbase9.apache.org,36133,1689089616857 record at close sequenceid=2 2023-07-11 15:33:54,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:54,850 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-11 15:33:54,850 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure 3745a79e3f352ef0234891fb8fbe49e5, server=jenkins-hbase9.apache.org,43957,1689089616370 in 167 msec 2023-07-11 15:33:54,851 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,36133,1689089616857; forceNewPlan=false, retain=false 2023-07-11 15:33:54,856 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=6836cbfa8c240cce30ce00220f8fbeee, regionState=CLOSED 2023-07-11 15:33:54,856 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089634856"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089634856"}]},"ts":"1689089634856"} 2023-07-11 15:33:54,859 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure 
pid=106, resume processing ppid=104 2023-07-11 15:33:54,859 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 6836cbfa8c240cce30ce00220f8fbeee, server=jenkins-hbase9.apache.org,45349,1689089620952 in 175 msec 2023-07-11 15:33:54,860 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,36133,1689089616857; forceNewPlan=false, retain=false 2023-07-11 15:33:55,001 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=3745a79e3f352ef0234891fb8fbe49e5, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:55,001 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=6836cbfa8c240cce30ce00220f8fbeee, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:55,002 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089635001"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089635001"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089635001"}]},"ts":"1689089635001"} 2023-07-11 15:33:55,002 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089635001"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089635001"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089635001"}]},"ts":"1689089635001"} 2023-07-11 15:33:55,003 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=103, state=RUNNABLE; OpenRegionProcedure 3745a79e3f352ef0234891fb8fbe49e5, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:55,004 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=104, state=RUNNABLE; OpenRegionProcedure 6836cbfa8c240cce30ce00220f8fbeee, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:55,163 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 
2023-07-11 15:33:55,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3745a79e3f352ef0234891fb8fbe49e5, NAME => 'GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:55,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:55,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:55,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:55,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:55,166 INFO [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:55,167 DEBUG [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/f 2023-07-11 15:33:55,167 DEBUG [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/f 2023-07-11 15:33:55,167 INFO [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3745a79e3f352ef0234891fb8fbe49e5 columnFamilyName f 2023-07-11 15:33:55,168 INFO [StoreOpener-3745a79e3f352ef0234891fb8fbe49e5-1] regionserver.HStore(310): Store=3745a79e3f352ef0234891fb8fbe49e5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:55,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:55,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:55,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:55,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 3745a79e3f352ef0234891fb8fbe49e5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11758454240, jitterRate=0.09509138762950897}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:55,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 3745a79e3f352ef0234891fb8fbe49e5: 2023-07-11 15:33:55,176 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5., pid=107, masterSystemTime=1689089635159 2023-07-11 15:33:55,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:55,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:55,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 
2023-07-11 15:33:55,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6836cbfa8c240cce30ce00220f8fbeee, NAME => 'GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:55,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:55,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,179 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=3745a79e3f352ef0234891fb8fbe49e5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:55,180 INFO [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,180 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089635179"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089635179"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089635179"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089635179"}]},"ts":"1689089635179"} 2023-07-11 15:33:55,181 DEBUG [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/f 2023-07-11 15:33:55,181 DEBUG [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/f 2023-07-11 15:33:55,182 INFO [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6836cbfa8c240cce30ce00220f8fbeee columnFamilyName f 2023-07-11 15:33:55,183 INFO [StoreOpener-6836cbfa8c240cce30ce00220f8fbeee-1] regionserver.HStore(310): Store=6836cbfa8c240cce30ce00220f8fbeee/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:55,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,185 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=103 2023-07-11 15:33:55,185 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=103, state=SUCCESS; OpenRegionProcedure 3745a79e3f352ef0234891fb8fbe49e5, server=jenkins-hbase9.apache.org,36133,1689089616857 in 178 msec 2023-07-11 15:33:55,188 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, REOPEN/MOVE in 518 msec 2023-07-11 15:33:55,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 6836cbfa8c240cce30ce00220f8fbeee; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11927628480, jitterRate=0.11084696650505066}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:55,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 6836cbfa8c240cce30ce00220f8fbeee: 2023-07-11 15:33:55,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee., pid=108, masterSystemTime=1689089635159 2023-07-11 15:33:55,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:55,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 
2023-07-11 15:33:55,194 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=6836cbfa8c240cce30ce00220f8fbeee, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:55,194 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089635193"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089635193"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089635193"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089635193"}]},"ts":"1689089635193"} 2023-07-11 15:33:55,198 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=104 2023-07-11 15:33:55,198 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=104, state=SUCCESS; OpenRegionProcedure 6836cbfa8c240cce30ce00220f8fbeee, server=jenkins-hbase9.apache.org,36133,1689089616857 in 192 msec 2023-07-11 15:33:55,203 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, REOPEN/MOVE in 527 msec 2023-07-11 15:33:55,545 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 15:33:55,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-11 15:33:55,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1934420880. 
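With the move reported complete ("All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group ..."), the GetRSGroupInfoOfTable / GetRSGroupInfo calls that follow are the test checking group membership. A minimal sketch of such a verification, under the same RSGroupAdminClient assumption as above; the RegionLocator placement check and the AssertionError messages are illustrative additions, not taken from the test:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class VerifyGroupSketch {
      public static void main(String[] args) throws IOException {
        String group = "Group_testMultiTableMove_1934420880";
        TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");
        TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Both tables should now report the target group (GetRSGroupInfoOfTable above).
          RSGroupInfo infoA = rsGroupAdmin.getRSGroupInfoOfTable(tableA);
          RSGroupInfo infoB = rsGroupAdmin.getRSGroupInfoOfTable(tableB);
          if (infoA == null || infoB == null
              || !group.equals(infoA.getName()) || !group.equals(infoB.getName())) {
            throw new AssertionError("tables are not in the expected rsgroup");
          }
          // Each region should be hosted by a server of the group (here the reopened regions
          // landed on jenkins-hbase9.apache.org,36133,1689089616857).
          try (RegionLocator locator = conn.getRegionLocator(tableB)) {
            for (HRegionLocation loc : locator.getAllRegionLocations()) {
              if (!infoB.getServers().contains(loc.getServerName().getAddress())) {
                throw new AssertionError("region not on a group server: " + loc);
              }
            }
          }
        }
      }
    }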
2023-07-11 15:33:55,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:55,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:55,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:55,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-11 15:33:55,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:33:55,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-11 15:33:55,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:33:55,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:55,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:55,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1934420880 2023-07-11 15:33:55,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:55,691 INFO [Listener at localhost/45661] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-11 15:33:55,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable GrouptestMultiTableMoveA 2023-07-11 15:33:55,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 15:33:55,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-11 15:33:55,702 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089635702"}]},"ts":"1689089635702"} 2023-07-11 
15:33:55,703 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-11 15:33:55,706 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-11 15:33:55,707 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, UNASSIGN}] 2023-07-11 15:33:55,711 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, UNASSIGN 2023-07-11 15:33:55,712 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=6836cbfa8c240cce30ce00220f8fbeee, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:55,712 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089635712"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089635712"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089635712"}]},"ts":"1689089635712"} 2023-07-11 15:33:55,714 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 6836cbfa8c240cce30ce00220f8fbeee, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:55,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-11 15:33:55,866 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 6836cbfa8c240cce30ce00220f8fbeee, disabling compactions & flushes 2023-07-11 15:33:55,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:55,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:55,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. after waiting 0 ms 2023-07-11 15:33:55,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 
2023-07-11 15:33:55,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:33:55,877 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee. 2023-07-11 15:33:55,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 6836cbfa8c240cce30ce00220f8fbeee: 2023-07-11 15:33:55,879 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:55,880 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=6836cbfa8c240cce30ce00220f8fbeee, regionState=CLOSED 2023-07-11 15:33:55,880 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089635880"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089635880"}]},"ts":"1689089635880"} 2023-07-11 15:33:55,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-11 15:33:55,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 6836cbfa8c240cce30ce00220f8fbeee, server=jenkins-hbase9.apache.org,36133,1689089616857 in 168 msec 2023-07-11 15:33:55,887 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-11 15:33:55,887 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=6836cbfa8c240cce30ce00220f8fbeee, UNASSIGN in 179 msec 2023-07-11 15:33:55,888 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089635888"}]},"ts":"1689089635888"} 2023-07-11 15:33:55,890 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-11 15:33:55,891 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-11 15:33:55,894 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 196 msec 2023-07-11 15:33:56,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-11 15:33:56,003 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-11 15:33:56,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete GrouptestMultiTableMoveA 2023-07-11 15:33:56,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-11 15:33:56,007 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 15:33:56,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1934420880' 2023-07-11 15:33:56,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1934420880 2023-07-11 15:33:56,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,010 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 15:33:56,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-11 15:33:56,014 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:56,016 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/recovered.edits] 2023-07-11 15:33:56,022 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/recovered.edits/7.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee/recovered.edits/7.seqid 2023-07-11 15:33:56,022 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveA/6836cbfa8c240cce30ce00220f8fbeee 2023-07-11 15:33:56,022 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-11 15:33:56,025 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 15:33:56,027 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-11 15:33:56,029 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-11 15:33:56,030 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 15:33:56,030 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-11 15:33:56,030 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089636030"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:56,032 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 15:33:56,032 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6836cbfa8c240cce30ce00220f8fbeee, NAME => 'GrouptestMultiTableMoveA,,1689089633097.6836cbfa8c240cce30ce00220f8fbeee.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 15:33:56,032 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-11 15:33:56,032 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689089636032"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:56,035 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-11 15:33:56,037 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 15:33:56,039 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 33 msec 2023-07-11 15:33:56,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-11 15:33:56,114 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-11 15:33:56,115 INFO [Listener at localhost/45661] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-11 15:33:56,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable GrouptestMultiTableMoveB 2023-07-11 15:33:56,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 15:33:56,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-11 15:33:56,119 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089636119"}]},"ts":"1689089636119"} 2023-07-11 15:33:56,121 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-11 15:33:56,122 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-11 15:33:56,123 INFO 
[PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, UNASSIGN}] 2023-07-11 15:33:56,125 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, UNASSIGN 2023-07-11 15:33:56,126 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=3745a79e3f352ef0234891fb8fbe49e5, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:56,126 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089636126"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089636126"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089636126"}]},"ts":"1689089636126"} 2023-07-11 15:33:56,128 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 3745a79e3f352ef0234891fb8fbe49e5, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:56,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-11 15:33:56,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:56,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 3745a79e3f352ef0234891fb8fbe49e5, disabling compactions & flushes 2023-07-11 15:33:56,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:56,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:56,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. after waiting 0 ms 2023-07-11 15:33:56,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 2023-07-11 15:33:56,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:33:56,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5. 
2023-07-11 15:33:56,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 3745a79e3f352ef0234891fb8fbe49e5: 2023-07-11 15:33:56,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:56,290 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=3745a79e3f352ef0234891fb8fbe49e5, regionState=CLOSED 2023-07-11 15:33:56,290 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689089636290"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089636290"}]},"ts":"1689089636290"} 2023-07-11 15:33:56,294 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-11 15:33:56,294 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 3745a79e3f352ef0234891fb8fbe49e5, server=jenkins-hbase9.apache.org,36133,1689089616857 in 164 msec 2023-07-11 15:33:56,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-11 15:33:56,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3745a79e3f352ef0234891fb8fbe49e5, UNASSIGN in 171 msec 2023-07-11 15:33:56,297 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089636297"}]},"ts":"1689089636297"} 2023-07-11 15:33:56,298 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-11 15:33:56,301 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-11 15:33:56,303 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 187 msec 2023-07-11 15:33:56,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-11 15:33:56,422 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-11 15:33:56,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete GrouptestMultiTableMoveB 2023-07-11 15:33:56,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 15:33:56,426 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 15:33:56,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1934420880' 2023-07-11 15:33:56,427 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 15:33:56,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1934420880 2023-07-11 15:33:56,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,431 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:56,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,433 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/recovered.edits] 2023-07-11 15:33:56,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-11 15:33:56,440 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/recovered.edits/7.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5/recovered.edits/7.seqid 2023-07-11 15:33:56,440 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/GrouptestMultiTableMoveB/3745a79e3f352ef0234891fb8fbe49e5 2023-07-11 15:33:56,441 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-11 15:33:56,443 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 15:33:56,445 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-11 15:33:56,447 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-11 15:33:56,448 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 15:33:56,448 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-11 15:33:56,448 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089636448"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:56,450 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 15:33:56,450 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3745a79e3f352ef0234891fb8fbe49e5, NAME => 'GrouptestMultiTableMoveB,,1689089633715.3745a79e3f352ef0234891fb8fbe49e5.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 15:33:56,450 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-11 15:33:56,450 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689089636450"}]},"ts":"9223372036854775807"} 2023-07-11 15:33:56,451 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-11 15:33:56,455 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 15:33:56,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 32 msec 2023-07-11 15:33:56,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-11 15:33:56,539 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-11 15:33:56,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:56,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
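
The two tables exercised by testMultiTableMove are torn down above through ordinary client calls: each table is first disabled (DisableTableProcedure, pid=113) and then deleted (DeleteTableProcedure, pid=112 and pid=116). A minimal sketch of that client-side sequence, assuming a standard HBase Connection and using only the public Admin API (the test's own helper code is not shown in this log; the class name below is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropMoveTestTables {
      public static void main(String[] args) throws Exception {
        // Assumes an hbase-site.xml for the target (mini-)cluster is on the classpath.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Table names taken from the log above.
          for (String name : new String[] { "GrouptestMultiTableMoveA", "GrouptestMultiTableMoveB" }) {
            TableName table = TableName.valueOf(name);
            if (admin.tableExists(table)) {
              admin.disableTable(table);   // drives a DisableTableProcedure, e.g. pid=113 above
              admin.deleteTable(table);    // drives a DeleteTableProcedure, e.g. pid=112 / pid=116 above
            }
          }
        }
      }
    }

The synchronous disableTable/deleteTable calls block on the master procedure, which matches the HBaseAdmin$TableFuture "procId: ... completed" lines appearing only after the PEWorker threads report SUCCESS.
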
2023-07-11 15:33:56,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:56,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133] to rsgroup default 2023-07-11 15:33:56,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1934420880 2023-07-11 15:33:56,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1934420880, current retry=0 2023-07-11 15:33:56,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857] are moved back to Group_testMultiTableMove_1934420880 2023-07-11 15:33:56,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1934420880 => default 2023-07-11 15:33:56,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_testMultiTableMove_1934420880 2023-07-11 15:33:56,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 15:33:56,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:56,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:56,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
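
The teardown above then restores the default layout: the group's remaining server is moved back to the default rsgroup and Group_testMultiTableMove_1934420880 is removed, all via the RSGroupAdminService endpoint (RSGroupAdminClient is the client-side stub visible in the stack traces that follow). A minimal sketch of that sequence, assuming the RSGroupAdminClient constructor and method signatures from the hbase-rsgroup module (not verified against this exact branch) and the host/port values copied from the log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RestoreDefaultRSGroup {
      // 'conn' is an already-open Connection to the mini-cluster; names and ports come from the log above.
      static void restoreDefaults(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Move the group's remaining region server back to 'default' (the MoveServers request above).
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 36133)),
            "default");
        // Remove the now-empty test group (the RemoveRSGroup request above).
        rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_1934420880");
      }
    }

For context on the failure that follows: port 44179 is the master's RPC port in this run rather than a registered region server, which is consistent with the ConstraintException ("Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist") that TestRSGroupsBase logs as "Got this on setup, FYI".
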
2023-07-11 15:33:56,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:56,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:56,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:56,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:56,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:56,566 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:56,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:56,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:56,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:56,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:56,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:56,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090836579, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:56,580 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:33:56,582 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:56,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,583 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:56,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:56,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,605 INFO [Listener at localhost/45661] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=509 (was 513), OpenFileDescriptor=808 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=524 (was 500) - SystemLoadAverage LEAK? 
-, ProcessCount=176 (was 177), AvailableMemoryMB=6419 (was 6518) 2023-07-11 15:33:56,605 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-11 15:33:56,620 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=509, OpenFileDescriptor=808, MaxFileDescriptor=60000, SystemLoadAverage=524, ProcessCount=176, AvailableMemoryMB=6418 2023-07-11 15:33:56,620 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-11 15:33:56,621 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-11 15:33:56,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:56,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 15:33:56,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:56,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:56,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:56,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:56,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:56,634 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:56,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:56,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-11 15:33:56,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:56,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:56,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:56,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:56,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090836650, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:56,651 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 15:33:56,652 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:56,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,653 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:56,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:56,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:56,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup oldGroup 2023-07-11 15:33:56,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 15:33:56,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:56,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495] to rsgroup oldGroup 2023-07-11 15:33:56,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 15:33:56,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 15:33:56,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669] are moved back to default 2023-07-11 15:33:56,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-11 15:33:56,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=oldGroup 2023-07-11 15:33:56,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=oldGroup 2023-07-11 15:33:56,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:56,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,682 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup anotherRSGroup 2023-07-11 15:33:56,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-11 15:33:56,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 15:33:56,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:33:56,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:56,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:43957] to rsgroup anotherRSGroup 2023-07-11 15:33:56,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-11 15:33:56,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 15:33:56,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:33:56,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 15:33:56,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,43957,1689089616370] are moved back to default 2023-07-11 15:33:56,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-11 15:33:56,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,701 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-11 15:33:56,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-11 15:33:56,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-11 15:33:56,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:56,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.2.10:55202 deadline: 1689090836708, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-11 15:33:56,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from oldGroup to anotherRSGroup 2023-07-11 15:33:56,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: 
anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:56,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.2.10:55202 deadline: 1689090836710, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-11 15:33:56,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from default to newRSGroup2 2023-07-11 15:33:56,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:56,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.2.10:55202 deadline: 1689090836711, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-11 15:33:56,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from oldGroup to default 2023-07-11 15:33:56,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:56,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.2.10:55202 deadline: 1689090836712, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-11 15:33:56,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:56,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
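The three rename rejections traced above all come from the same precondition checks in RSGroupInfoManagerImpl.renameRSGroup: the source group must exist, the target name must be free, and the built-in "default" group can never be renamed. A minimal client-side sketch of how those checks surface through the RSGroupAdminClient referenced in the stack traces; the constructor and method signatures are assumed from those traces, and the group names are the ones used by the test.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameRSGroupSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Each call mirrors one rejection in the log above.
          tryRename(rsGroupAdmin, "nonExistingRSGroup", "newRSGroup1"); // source group does not exist
          tryRename(rsGroupAdmin, "oldGroup", "anotherRSGroup");        // target group already exists
          tryRename(rsGroupAdmin, "default", "newRSGroup2");            // the default group cannot be renamed
        }
      }

      private static void tryRename(RSGroupAdminClient rsGroupAdmin, String from, String to)
          throws IOException {
        try {
          rsGroupAdmin.renameRSGroup(from, to);
        } catch (ConstraintException e) {
          // The master rejects the request before any rsgroup znode is rewritten.
          System.out.println("rename " + from + " -> " + to + " rejected: " + e.getMessage());
        }
      }
    }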
2023-07-11 15:33:56,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:56,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:43957] to rsgroup default 2023-07-11 15:33:56,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-11 15:33:56,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 15:33:56,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:33:56,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-11 15:33:56,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,43957,1689089616370] are moved back to anotherRSGroup 2023-07-11 15:33:56,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-11 15:33:56,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup anotherRSGroup 2023-07-11 15:33:56,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 15:33:56,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-11 15:33:56,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:56,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:56,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty 
set. Ignoring. 2023-07-11 15:33:56,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:56,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495] to rsgroup default 2023-07-11 15:33:56,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 15:33:56,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-11 15:33:56,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669] are moved back to oldGroup 2023-07-11 15:33:56,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-11 15:33:56,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup oldGroup 2023-07-11 15:33:56,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 15:33:56,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:56,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:56,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
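The sequence above is the test's standard teardown for a server group: an (ignored) empty moveTables call, a MoveServers request that returns the group's region servers to "default", and finally RemoveRSGroup once the group is empty. A hedged sketch of that sequence against the same RSGroupAdminClient; getRSGroupInfo, moveServers and removeRSGroup match the operations logged above, but their exact signatures and the null-on-missing-group behaviour are assumptions.

    import java.io.IOException;
    import java.util.Set;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RetireRSGroupSketch {
      private RetireRSGroupSketch() {}

      /** Moves every server of the named group back to 'default', then removes the group. */
      static void retireGroup(RSGroupAdminClient rsGroupAdmin, String group) throws IOException {
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        if (info == null) {
          return; // group already gone, nothing to clean up
        }
        Set<Address> servers = info.getServers();
        if (!servers.isEmpty()) {
          // Mirrors "move servers [...] to rsgroup default" in the log.
          rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
        }
        // Mirrors "remove rsgroup oldGroup" / "remove rsgroup anotherRSGroup".
        rsGroupAdmin.removeRSGroup(group);
      }
    }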
2023-07-11 15:33:56,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:56,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:56,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:56,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:56,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:56,770 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:56,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:56,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:56,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:56,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:56,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:56,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090836782, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:56,782 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:33:56,784 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:56,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,785 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:56,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:56,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,803 INFO [Listener at localhost/45661] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512 (was 509) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=793 (was 808), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=524 (was 524), ProcessCount=176 (was 176), AvailableMemoryMB=6417 (was 6418) 2023-07-11 15:33:56,803 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-11 15:33:56,818 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=512, OpenFileDescriptor=793, MaxFileDescriptor=60000, SystemLoadAverage=524, ProcessCount=176, AvailableMemoryMB=6416 2023-07-11 15:33:56,818 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-11 15:33:56,818 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-11 15:33:56,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:33:56,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
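The "Waiting up to [60,000] milli-secs" and "Waiting for cleanup to finish" messages above are a bounded polling loop: after each test method the harness keeps listing the rsgroups until only "default" and "master" remain or the 60-second budget runs out, and the ResourceChecker lines then compare thread and file-descriptor counts before and after the method. A simplified stand-in for that polling pattern in plain Java (this is not the HBase Waiter utility itself; the predicate, interval and timeout are all illustrative):

    import java.util.concurrent.TimeUnit;
    import java.util.function.BooleanSupplier;

    public final class PollUntil {
      private PollUntil() {}

      /** Re-evaluates the condition every intervalMs until it holds or timeoutMs elapses. */
      static boolean pollUntil(BooleanSupplier condition, long timeoutMs, long intervalMs)
          throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
          if (condition.getAsBoolean()) {
            return true;
          }
          Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean();
      }
    }

Here the test's condition would be "listRSGroups() reports only the default and master groups", checked against the 60,000 ms budget shown in the log.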
2023-07-11 15:33:56,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:56,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:33:56,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:33:56,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:33:56,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:33:56,831 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:33:56,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:33:56,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:33:56,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:56,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:33:56,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:33:56,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090836841, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:33:56,842 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:33:56,843 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:56,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,844 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:33:56,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:56,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:56,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:56,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup oldgroup 2023-07-11 15:33:56,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 15:33:56,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:56,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495] to rsgroup oldgroup 2023-07-11 15:33:56,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 15:33:56,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 15:33:56,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669] are moved back to default 2023-07-11 15:33:56,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-11 15:33:56,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:56,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:56,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:56,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=oldgroup 2023-07-11 15:33:56,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 
15:33:56,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:56,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-11 15:33:56,872 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:33:56,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-11 15:33:56,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 15:33:56,874 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 15:33:56,875 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:56,875 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:56,875 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:56,878 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:33:56,880 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:56,880 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b empty. 
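The create statement echoed by the master above (table "testRename", a single family "tr", one version, everything else at defaults) is the table the rename test later moves between groups. An equivalent request through the public Admin API, as a sketch; only the table and family names come from the log, and the builder calls are the standard 2.x client API rather than anything this test necessarily uses.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestRenameTable {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptorBuilder table =
              TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"));
          // Family 'tr' with a single version, matching the descriptor printed by the master.
          table.setColumnFamily(
              ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr")).setMaxVersions(1).build());
          // Blocks until the CreateTableProcedure (pid=117 above) completes.
          admin.createTable(table.build());
        }
      }
    }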
2023-07-11 15:33:56,881 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:56,881 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-11 15:33:56,899 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:56,900 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => e1d75a5b638d7310f1fb4df8d75d5f7b, NAME => 'testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:56,914 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:56,914 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing e1d75a5b638d7310f1fb4df8d75d5f7b, disabling compactions & flushes 2023-07-11 15:33:56,914 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:56,914 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:56,914 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. after waiting 0 ms 2023-07-11 15:33:56,914 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:56,915 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:56,915 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for e1d75a5b638d7310f1fb4df8d75d5f7b: 2023-07-11 15:33:56,917 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:33:56,918 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089636918"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089636918"}]},"ts":"1689089636918"} 2023-07-11 15:33:56,920 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
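
The descriptor printed in the HMaster "create 'testRename'" line above is a single-family table with family 'tr'. A hedged sketch of the equivalent client call using the standard TableDescriptorBuilder / ColumnFamilyDescriptorBuilder API; the class name is illustrative, and only a subset of the logged attributes is set explicitly here:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestRenameSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)                     // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
                  .setMaxVersions(1)                       // VERSIONS => '1'
                  .build())
              .build();
          // This is the client request that HMaster logs as create 'testRename' and
          // runs as CreateTableProcedure pid=117 above.
          admin.createTable(desc);
        }
      }
    }
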
2023-07-11 15:33:56,920 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:33:56,921 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089636921"}]},"ts":"1689089636921"} 2023-07-11 15:33:56,924 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-11 15:33:56,928 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:56,929 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:56,929 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:56,929 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:56,929 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, ASSIGN}] 2023-07-11 15:33:56,931 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, ASSIGN 2023-07-11 15:33:56,932 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:33:56,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 15:33:57,082 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
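
The repeated "Checking to see if procedure is done pid=117" DEBUG lines are the client polling the master for completion of the CreateTableProcedure. The same create can be issued through the asynchronous API, which surfaces that wait as an explicit future; a sketch assuming the two-argument createTableAsync overload (null split keys) available on 2.x clients, class name illustrative:

    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class ProcedurePollingSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          Future<Void> pending = admin.createTableAsync(
              TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
                  .build(),
              null);  // no pre-split keys
          // get() blocks until the master reports the create procedure finished,
          // which is what drives the "Checking to see if procedure is done" polling above.
          pending.get();
        }
      }
    }
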
2023-07-11 15:33:57,083 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:57,084 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089637083"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089637083"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089637083"}]},"ts":"1689089637083"} 2023-07-11 15:33:57,085 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:57,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 15:33:57,240 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:57,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1d75a5b638d7310f1fb4df8d75d5f7b, NAME => 'testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:57,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:57,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,242 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,244 DEBUG [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/tr 2023-07-11 15:33:57,244 DEBUG [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/tr 2023-07-11 15:33:57,244 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1d75a5b638d7310f1fb4df8d75d5f7b columnFamilyName tr 2023-07-11 15:33:57,245 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] regionserver.HStore(310): Store=e1d75a5b638d7310f1fb4df8d75d5f7b/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:57,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:57,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e1d75a5b638d7310f1fb4df8d75d5f7b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11382955040, jitterRate=0.06012029945850372}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:57,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e1d75a5b638d7310f1fb4df8d75d5f7b: 2023-07-11 15:33:57,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b., pid=119, masterSystemTime=1689089637237 2023-07-11 15:33:57,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:57,253 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 
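
Once the region is opened on jenkins-hbase9.apache.org,43957 (as logged above), its placement can be read back from hbase:meta with a RegionLocator. A small sketch, assuming a live connection to this cluster; the class name is illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class LocateRegionSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
          // Reads the region -> server mapping from hbase:meta; right after the open above
          // the single region would report jenkins-hbase9.apache.org,43957,... as its host.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
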
2023-07-11 15:33:57,253 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:57,253 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089637253"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089637253"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089637253"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089637253"}]},"ts":"1689089637253"} 2023-07-11 15:33:57,256 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-11 15:33:57,256 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,43957,1689089616370 in 169 msec 2023-07-11 15:33:57,257 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-11 15:33:57,257 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, ASSIGN in 327 msec 2023-07-11 15:33:57,257 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:33:57,258 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089637258"}]},"ts":"1689089637258"} 2023-07-11 15:33:57,259 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-11 15:33:57,261 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:33:57,262 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 392 msec 2023-07-11 15:33:57,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 15:33:57,476 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-11 15:33:57,477 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-11 15:33:57,477 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:57,480 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-11 15:33:57,481 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:57,481 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
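
The "Waiting until all regions of table testRename get assigned. Timeout = 60000ms" lines above come from HBaseTestingUtility. A sketch of that wait, assuming testUtil is the utility that started the mini-cluster; the class and method names in the sketch are illustrative:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      // testUtil must be the HBaseTestingUtility driving the mini-cluster shown in this log.
      static void waitForTestRename(HBaseTestingUtility testUtil) throws Exception {
        // Blocks until every region of the table is assigned in hbase:meta and the
        // AssignmentManager agrees (the "assigned to meta. Checking AM states." lines);
        // the timeout logged above for this wait is 60000 ms.
        testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"));
      }
    }
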
2023-07-11 15:33:57,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [testRename] to rsgroup oldgroup 2023-07-11 15:33:57,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 15:33:57,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:57,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:57,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:33:57,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-11 15:33:57,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region e1d75a5b638d7310f1fb4df8d75d5f7b to RSGroup oldgroup 2023-07-11 15:33:57,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:33:57,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:33:57,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:33:57,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:33:57,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:33:57,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, REOPEN/MOVE 2023-07-11 15:33:57,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-11 15:33:57,489 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, REOPEN/MOVE 2023-07-11 15:33:57,490 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:57,490 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089637490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089637490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089637490"}]},"ts":"1689089637490"} 2023-07-11 15:33:57,492 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, 
state=RUNNABLE; CloseRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:57,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e1d75a5b638d7310f1fb4df8d75d5f7b, disabling compactions & flushes 2023-07-11 15:33:57,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:57,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:57,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. after waiting 0 ms 2023-07-11 15:33:57,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:57,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:57,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:57,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e1d75a5b638d7310f1fb4df8d75d5f7b: 2023-07-11 15:33:57,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding e1d75a5b638d7310f1fb4df8d75d5f7b move to jenkins-hbase9.apache.org,36133,1689089616857 record at close sequenceid=2 2023-07-11 15:33:57,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,660 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=CLOSED 2023-07-11 15:33:57,661 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089637660"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089637660"}]},"ts":"1689089637660"} 2023-07-11 15:33:57,666 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-11 15:33:57,666 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,43957,1689089616370 in 170 msec 2023-07-11 15:33:57,667 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,36133,1689089616857; 
forceNewPlan=false, retain=false 2023-07-11 15:33:57,817 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-11 15:33:57,818 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:57,818 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089637818"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089637818"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089637818"}]},"ts":"1689089637818"} 2023-07-11 15:33:57,819 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:33:57,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:57,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1d75a5b638d7310f1fb4df8d75d5f7b, NAME => 'testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:57,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:57,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,976 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,977 DEBUG [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/tr 2023-07-11 15:33:57,977 DEBUG [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/tr 2023-07-11 15:33:57,978 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1d75a5b638d7310f1fb4df8d75d5f7b columnFamilyName tr 2023-07-11 15:33:57,978 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] regionserver.HStore(310): Store=e1d75a5b638d7310f1fb4df8d75d5f7b/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:57,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:33:57,983 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e1d75a5b638d7310f1fb4df8d75d5f7b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10138127840, jitterRate=-0.055813267827034}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:57,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e1d75a5b638d7310f1fb4df8d75d5f7b: 2023-07-11 15:33:57,984 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b., pid=122, masterSystemTime=1689089637971 2023-07-11 15:33:57,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:33:57,985 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 
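
The MoveTables request above triggers the REOPEN/MOVE procedure chain (pids 120-122) that closes the region on 43957 and reopens it on 36133, a member of oldgroup. A sketch of the client side of that request plus a follow-up membership check, again assuming the RSGroupAdminClient API; the class name is illustrative:

    import java.util.Set;
    import java.util.TreeSet;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // MoveTables: the master unassigns each region of the table and reassigns it
          // onto servers of the target group (the REOPEN/MOVE procedures above).
          Set<TableName> tables = new TreeSet<>();
          tables.add(TableName.valueOf("testRename"));
          rsGroupAdmin.moveTables(tables, "oldgroup");

          // GetRSGroupInfoOfTable: confirm the table now belongs to the target group.
          RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename is in group: " + group.getName());
        }
      }
    }
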
2023-07-11 15:33:57,986 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:33:57,986 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089637986"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089637986"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089637986"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089637986"}]},"ts":"1689089637986"} 2023-07-11 15:33:57,988 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-11 15:33:57,988 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,36133,1689089616857 in 168 msec 2023-07-11 15:33:57,989 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, REOPEN/MOVE in 499 msec 2023-07-11 15:33:58,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-11 15:33:58,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-11 15:33:58,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:33:58,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:58,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:58,500 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:58,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=testRename 2023-07-11 15:33:58,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:33:58,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=oldgroup 2023-07-11 15:33:58,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:58,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): 
Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=testRename 2023-07-11 15:33:58,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:33:58,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:33:58,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:58,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup normal 2023-07-11 15:33:58,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 15:33:58,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 15:33:58,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:58,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:58,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:33:58,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:33:58,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:58,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:58,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:43957] to rsgroup normal 2023-07-11 15:33:58,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 15:33:58,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 15:33:58,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:58,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:58,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:33:58,537 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 15:33:58,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,43957,1689089616370] are moved back to default 2023-07-11 15:33:58,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-11 15:33:58,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:33:58,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:33:58,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:33:58,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=normal 2023-07-11 15:33:58,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:33:58,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:33:58,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-11 15:33:58,549 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:33:58,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-11 15:33:58,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-11 15:33:58,551 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 15:33:58,552 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 15:33:58,552 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:58,553 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:58,553 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 8 2023-07-11 15:33:58,555 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:33:58,557 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/unmovedTable/203d287260feed5f883777745504f77e 2023-07-11 15:33:58,558 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/unmovedTable/203d287260feed5f883777745504f77e empty. 2023-07-11 15:33:58,558 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/unmovedTable/203d287260feed5f883777745504f77e 2023-07-11 15:33:58,558 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-11 15:33:58,581 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-11 15:33:58,582 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 203d287260feed5f883777745504f77e, NAME => 'unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:33:58,604 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:58,604 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 203d287260feed5f883777745504f77e, disabling compactions & flushes 2023-07-11 15:33:58,604 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:58,604 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:58,604 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. after waiting 0 ms 2023-07-11 15:33:58,604 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:58,604 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 
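
Throughout this run the group definitions are mirrored to ZooKeeper as child znodes of /hbase/rsgroup (the "Updating znode: /hbase/rsgroup/..." lines). A rough diagnostic sketch that lists those children with the plain ZooKeeper client; the parent path is taken from the log, but the connect string localhost:2181 is an assumption — the mini-cluster uses a randomly chosen client port:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class RsGroupZnodeSketch {
      public static void main(String[] args) throws Exception {
        // Replace the connect string with the cluster's real ZooKeeper quorum.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
        try {
          // Each rsgroup written by RSGroupInfoManagerImpl appears as a child znode here,
          // e.g. oldgroup, normal, default, master in the log above.
          List<String> groups = zk.getChildren("/hbase/rsgroup", false);
          System.out.println("rsgroup znodes: " + groups);
        } finally {
          zk.close();
        }
      }
    }
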
2023-07-11 15:33:58,604 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 203d287260feed5f883777745504f77e: 2023-07-11 15:33:58,607 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:33:58,608 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089638607"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089638607"}]},"ts":"1689089638607"} 2023-07-11 15:33:58,609 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:33:58,610 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:33:58,610 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089638610"}]},"ts":"1689089638610"} 2023-07-11 15:33:58,611 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-11 15:33:58,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, ASSIGN}] 2023-07-11 15:33:58,623 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, ASSIGN 2023-07-11 15:33:58,625 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:33:58,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-11 15:33:58,777 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:58,777 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089638777"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089638777"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089638777"}]},"ts":"1689089638777"} 2023-07-11 15:33:58,779 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:58,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-11 15:33:58,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:58,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 203d287260feed5f883777745504f77e, NAME => 'unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:58,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 203d287260feed5f883777745504f77e 2023-07-11 15:33:58,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:58,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 203d287260feed5f883777745504f77e 2023-07-11 15:33:58,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 203d287260feed5f883777745504f77e 2023-07-11 15:33:58,940 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 203d287260feed5f883777745504f77e 2023-07-11 15:33:58,942 DEBUG [StoreOpener-203d287260feed5f883777745504f77e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/ut 2023-07-11 15:33:58,942 DEBUG [StoreOpener-203d287260feed5f883777745504f77e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/ut 2023-07-11 15:33:58,942 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 203d287260feed5f883777745504f77e columnFamilyName ut 2023-07-11 15:33:58,943 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] regionserver.HStore(310): Store=203d287260feed5f883777745504f77e/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:58,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e 2023-07-11 15:33:58,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e 2023-07-11 15:33:58,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 203d287260feed5f883777745504f77e 2023-07-11 15:33:58,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:33:58,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 203d287260feed5f883777745504f77e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10258011200, jitterRate=-0.04464825987815857}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:58,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 203d287260feed5f883777745504f77e: 2023-07-11 15:33:58,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689089638545.203d287260feed5f883777745504f77e., pid=125, masterSystemTime=1689089638931 2023-07-11 15:33:58,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:58,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 
2023-07-11 15:33:58,954 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:58,954 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089638954"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089638954"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089638954"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089638954"}]},"ts":"1689089638954"} 2023-07-11 15:33:58,959 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-11 15:33:58,959 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,45349,1689089620952 in 179 msec 2023-07-11 15:33:58,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-11 15:33:58,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, ASSIGN in 338 msec 2023-07-11 15:33:58,961 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:33:58,962 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089638961"}]},"ts":"1689089638961"} 2023-07-11 15:33:58,963 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-11 15:33:58,965 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:33:58,967 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 420 msec 2023-07-11 15:33:59,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-11 15:33:59,154 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-11 15:33:59,155 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-11 15:33:59,155 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:59,160 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-11 15:33:59,160 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:33:59,161 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
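
After the earlier MoveServers call placed jenkins-hbase9.apache.org:43957 into group "normal", membership can also be checked per server rather than per group. A sketch assuming RSGroupAdminClient.getRSGroupOfServer is available on this branch; the class name and the hard-coded address (taken from the log) are illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ServerGroupMembershipSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // After the MoveServers call logged above, 43957 should resolve to "normal".
          RSGroupInfo group =
              rsGroupAdmin.getRSGroupOfServer(Address.fromString("jenkins-hbase9.apache.org:43957"));
          System.out.println("jenkins-hbase9.apache.org:43957 is in group: "
              + (group == null ? "<none>" : group.getName()));
        }
      }
    }
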
2023-07-11 15:33:59,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [unmovedTable] to rsgroup normal 2023-07-11 15:33:59,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 15:33:59,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 15:33:59,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:33:59,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:33:59,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:33:59,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-11 15:33:59,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region 203d287260feed5f883777745504f77e to RSGroup normal 2023-07-11 15:33:59,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, REOPEN/MOVE 2023-07-11 15:33:59,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-11 15:33:59,169 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, REOPEN/MOVE 2023-07-11 15:33:59,170 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:33:59,170 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089639170"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089639170"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089639170"}]},"ts":"1689089639170"} 2023-07-11 15:33:59,171 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:33:59,317 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-11 15:33:59,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 203d287260feed5f883777745504f77e 2023-07-11 15:33:59,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 203d287260feed5f883777745504f77e, disabling compactions & flushes 2023-07-11 15:33:59,325 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:59,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:59,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. after waiting 0 ms 2023-07-11 15:33:59,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:59,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:33:59,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:59,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 203d287260feed5f883777745504f77e: 2023-07-11 15:33:59,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 203d287260feed5f883777745504f77e move to jenkins-hbase9.apache.org,43957,1689089616370 record at close sequenceid=2 2023-07-11 15:33:59,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 203d287260feed5f883777745504f77e 2023-07-11 15:33:59,331 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=CLOSED 2023-07-11 15:33:59,332 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089639331"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089639331"}]},"ts":"1689089639331"} 2023-07-11 15:33:59,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-11 15:33:59,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,45349,1689089620952 in 162 msec 2023-07-11 15:33:59,335 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:33:59,486 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:59,486 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089639486"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089639486"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089639486"}]},"ts":"1689089639486"} 2023-07-11 15:33:59,488 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:33:59,646 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:59,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 203d287260feed5f883777745504f77e, NAME => 'unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:33:59,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 203d287260feed5f883777745504f77e 2023-07-11 15:33:59,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:33:59,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 203d287260feed5f883777745504f77e 2023-07-11 15:33:59,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 203d287260feed5f883777745504f77e 2023-07-11 15:33:59,648 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 203d287260feed5f883777745504f77e 2023-07-11 15:33:59,649 DEBUG [StoreOpener-203d287260feed5f883777745504f77e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/ut 2023-07-11 15:33:59,649 DEBUG [StoreOpener-203d287260feed5f883777745504f77e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/ut 2023-07-11 15:33:59,650 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
203d287260feed5f883777745504f77e columnFamilyName ut 2023-07-11 15:33:59,650 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] regionserver.HStore(310): Store=203d287260feed5f883777745504f77e/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:33:59,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e 2023-07-11 15:33:59,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e 2023-07-11 15:33:59,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 203d287260feed5f883777745504f77e 2023-07-11 15:33:59,656 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 203d287260feed5f883777745504f77e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11722742080, jitterRate=0.09176543354988098}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:33:59,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 203d287260feed5f883777745504f77e: 2023-07-11 15:33:59,658 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689089638545.203d287260feed5f883777745504f77e., pid=128, masterSystemTime=1689089639642 2023-07-11 15:33:59,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:33:59,666 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 
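
The MoveTables request at 15:33:59,163 above, and the REOPEN/MOVE of region 203d287260feed5f883777745504f77e that follows, are what a client triggers through the rsgroup admin endpoint. A minimal sketch, assuming an RSGroupAdminClient built from a Connection (the class and its moveServers overload appear in the stack trace near the end of this log; the exact moveTables signature and the connection setup here are assumptions):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Issues the MoveTables RPC seen in the log; the master then reopens the
      // table's region on a server of the target group (the REOPEN/MOVE above).
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
    }
  }
}
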
2023-07-11 15:33:59,666 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:33:59,666 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089639666"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089639666"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089639666"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089639666"}]},"ts":"1689089639666"} 2023-07-11 15:33:59,669 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-11 15:33:59,669 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,43957,1689089616370 in 180 msec 2023-07-11 15:33:59,670 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, REOPEN/MOVE in 501 msec 2023-07-11 15:34:00,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-11 15:34:00,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-11 15:34:00,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:00,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:00,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:00,175 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:00,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=unmovedTable 2023-07-11 15:34:00,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:00,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=normal 2023-07-11 15:34:00,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:00,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=unmovedTable 2023-07-11 15:34:00,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:00,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.2.10 rename rsgroup from oldgroup to newgroup 2023-07-11 15:34:00,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 15:34:00,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:00,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:00,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 15:34:00,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-11 15:34:00,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RenameRSGroup 2023-07-11 15:34:00,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:00,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:00,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=newgroup 2023-07-11 15:34:00,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:00,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=testRename 2023-07-11 15:34:00,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:00,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=unmovedTable 2023-07-11 15:34:00,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:00,196 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:00,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:00,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [unmovedTable] to rsgroup default 2023-07-11 15:34:00,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 15:34:00,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:00,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:00,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 15:34:00,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:34:00,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-11 15:34:00,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region 203d287260feed5f883777745504f77e to RSGroup default 2023-07-11 15:34:00,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, REOPEN/MOVE 2023-07-11 15:34:00,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-11 15:34:00,207 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, REOPEN/MOVE 2023-07-11 15:34:00,207 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:00,207 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089640207"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089640207"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089640207"}]},"ts":"1689089640207"} 2023-07-11 15:34:00,208 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:34:00,360 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 
203d287260feed5f883777745504f77e 2023-07-11 15:34:00,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 203d287260feed5f883777745504f77e, disabling compactions & flushes 2023-07-11 15:34:00,361 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:00,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:00,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. after waiting 0 ms 2023-07-11 15:34:00,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:00,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:34:00,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:00,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 203d287260feed5f883777745504f77e: 2023-07-11 15:34:00,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding 203d287260feed5f883777745504f77e move to jenkins-hbase9.apache.org,45349,1689089620952 record at close sequenceid=5 2023-07-11 15:34:00,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 203d287260feed5f883777745504f77e 2023-07-11 15:34:00,368 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=CLOSED 2023-07-11 15:34:00,368 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089640367"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089640367"}]},"ts":"1689089640367"} 2023-07-11 15:34:00,370 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-11 15:34:00,370 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,43957,1689089616370 in 161 msec 2023-07-11 15:34:00,371 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:34:00,521 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=OPENING, 
regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:00,522 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089640521"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089640521"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089640521"}]},"ts":"1689089640521"} 2023-07-11 15:34:00,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:34:00,665 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 15:34:00,678 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:00,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 203d287260feed5f883777745504f77e, NAME => 'unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:00,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 203d287260feed5f883777745504f77e 2023-07-11 15:34:00,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:00,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 203d287260feed5f883777745504f77e 2023-07-11 15:34:00,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 203d287260feed5f883777745504f77e 2023-07-11 15:34:00,680 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 203d287260feed5f883777745504f77e 2023-07-11 15:34:00,681 DEBUG [StoreOpener-203d287260feed5f883777745504f77e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/ut 2023-07-11 15:34:00,682 DEBUG [StoreOpener-203d287260feed5f883777745504f77e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/ut 2023-07-11 15:34:00,682 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 203d287260feed5f883777745504f77e columnFamilyName ut 2023-07-11 15:34:00,683 INFO [StoreOpener-203d287260feed5f883777745504f77e-1] regionserver.HStore(310): Store=203d287260feed5f883777745504f77e/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:00,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e 2023-07-11 15:34:00,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e 2023-07-11 15:34:00,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 203d287260feed5f883777745504f77e 2023-07-11 15:34:00,693 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 203d287260feed5f883777745504f77e; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9595850560, jitterRate=-0.10631677508354187}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:00,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 203d287260feed5f883777745504f77e: 2023-07-11 15:34:00,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689089638545.203d287260feed5f883777745504f77e., pid=131, masterSystemTime=1689089640674 2023-07-11 15:34:00,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:00,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 
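
The RenameRSGroup request at 15:34:00,179 above (oldgroup renamed to newgroup) followed by the GetRSGroupInfo/GetRSGroupInfoOfTable calls corresponds to something like the sketch below. The renameRSGroup, getRSGroupInfo and getRSGroupInfoOfTable method names are assumptions inferred from the RPC names in the log, not checked against the test source:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RenameGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Corresponds to the RenameRSGroup request logged above.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");

      // After the rename, the renamed group should own what oldgroup had:
      // testRename stays in the renamed group, unmovedTable stays in "normal".
      RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
      RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      System.out.println(renamed.getName() + " / " + ofTable.getName());
    }
  }
}
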
2023-07-11 15:34:00,696 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=203d287260feed5f883777745504f77e, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:00,696 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689089638545.203d287260feed5f883777745504f77e.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689089640696"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089640696"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089640696"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089640696"}]},"ts":"1689089640696"} 2023-07-11 15:34:00,699 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-11 15:34:00,699 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 203d287260feed5f883777745504f77e, server=jenkins-hbase9.apache.org,45349,1689089620952 in 175 msec 2023-07-11 15:34:00,700 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=203d287260feed5f883777745504f77e, REOPEN/MOVE in 493 msec 2023-07-11 15:34:01,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-11 15:34:01,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-11 15:34:01,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:01,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:43957] to rsgroup default 2023-07-11 15:34:01,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 15:34:01,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:01,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:01,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 15:34:01,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:34:01,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-11 15:34:01,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,43957,1689089616370] are moved back to normal 2023-07-11 15:34:01,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-11 15:34:01,215 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:01,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup normal 2023-07-11 15:34:01,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:01,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:01,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 15:34:01,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-11 15:34:01,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:01,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:01,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 15:34:01,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:01,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:01,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:01,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:01,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:01,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 15:34:01,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 15:34:01,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:01,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [testRename] to rsgroup default 2023-07-11 15:34:01,234 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:01,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 15:34:01,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:01,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-11 15:34:01,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(345): Moving region e1d75a5b638d7310f1fb4df8d75d5f7b to RSGroup default 2023-07-11 15:34:01,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, REOPEN/MOVE 2023-07-11 15:34:01,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-11 15:34:01,237 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, REOPEN/MOVE 2023-07-11 15:34:01,238 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:34:01,239 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089641238"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089641238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089641238"}]},"ts":"1689089641238"} 2023-07-11 15:34:01,240 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,36133,1689089616857}] 2023-07-11 15:34:01,392 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e1d75a5b638d7310f1fb4df8d75d5f7b, disabling compactions & flushes 2023-07-11 15:34:01,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:01,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:01,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 
after waiting 0 ms 2023-07-11 15:34:01,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:01,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 15:34:01,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:01,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e1d75a5b638d7310f1fb4df8d75d5f7b: 2023-07-11 15:34:01,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(3513): Adding e1d75a5b638d7310f1fb4df8d75d5f7b move to jenkins-hbase9.apache.org,43957,1689089616370 record at close sequenceid=5 2023-07-11 15:34:01,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,400 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=CLOSED 2023-07-11 15:34:01,400 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089641400"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089641400"}]},"ts":"1689089641400"} 2023-07-11 15:34:01,403 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-11 15:34:01,403 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,36133,1689089616857 in 162 msec 2023-07-11 15:34:01,403 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:34:01,554 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
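
The teardown steps logged just above (MoveServers of jenkins-hbase9.apache.org:43957 back to default, then RemoveRSGroup for normal) would be issued from the client roughly as follows. moveServers and its Address-based parameter are confirmed by the stack trace at the end of this log; removeRSGroup and the connection setup are assumptions based on the RPC names:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class GroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Matches "move servers [jenkins-hbase9.apache.org:43957] to rsgroup default" above.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 43957)),
          "default");
      // A group can only be dropped once it holds no servers and no tables,
      // which is why "remove rsgroup normal" in the log comes after the moves.
      rsGroupAdmin.removeRSGroup("normal");
    }
  }
}
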
2023-07-11 15:34:01,554 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:01,554 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089641554"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089641554"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089641554"}]},"ts":"1689089641554"} 2023-07-11 15:34:01,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:34:01,711 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:01,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1d75a5b638d7310f1fb4df8d75d5f7b, NAME => 'testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:01,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:01,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,713 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,713 DEBUG [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/tr 2023-07-11 15:34:01,714 DEBUG [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/tr 2023-07-11 15:34:01,714 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1d75a5b638d7310f1fb4df8d75d5f7b columnFamilyName tr 2023-07-11 15:34:01,714 INFO [StoreOpener-e1d75a5b638d7310f1fb4df8d75d5f7b-1] regionserver.HStore(310): Store=e1d75a5b638d7310f1fb4df8d75d5f7b/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:01,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:01,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e1d75a5b638d7310f1fb4df8d75d5f7b; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11713888480, jitterRate=0.09094087779521942}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:01,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e1d75a5b638d7310f1fb4df8d75d5f7b: 2023-07-11 15:34:01,720 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b., pid=134, masterSystemTime=1689089641707 2023-07-11 15:34:01,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:01,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 
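
The WARN and ConstraintException a little further down come from the shared teardown trying to move the master's own address (port 44179, which is not a region server) into a group. A hedged sketch of that call and the expected rejection; the catch mirrors the "Got this on setup, FYI" handling in TestRSGroupsBase.tearDownAfterMethod visible in the trace, and the surrounding setup is assumed:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterAddressSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      try {
        // 44179 is the master's RPC port in this run, not a region server, so the
        // master rejects the move with "Server ... is either offline or it does
        // not exist"; the test teardown logs it as a warning and moves on.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase9.apache.org", 44179)),
            "master");
      } catch (ConstraintException e) {
        System.out.println("Expected: " + e.getMessage());
      }
    }
  }
}
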
2023-07-11 15:34:01,722 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=e1d75a5b638d7310f1fb4df8d75d5f7b, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:01,722 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689089641721"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089641721"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089641721"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089641721"}]},"ts":"1689089641721"} 2023-07-11 15:34:01,724 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-11 15:34:01,724 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure e1d75a5b638d7310f1fb4df8d75d5f7b, server=jenkins-hbase9.apache.org,43957,1689089616370 in 167 msec 2023-07-11 15:34:01,725 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=e1d75a5b638d7310f1fb4df8d75d5f7b, REOPEN/MOVE in 487 msec 2023-07-11 15:34:02,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-11 15:34:02,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-11 15:34:02,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:02,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495] to rsgroup default 2023-07-11 15:34:02,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 15:34:02,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:02,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-11 15:34:02,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669] are moved back to newgroup 2023-07-11 15:34:02,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-11 15:34:02,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:02,245 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup newgroup 2023-07-11 15:34:02,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:02,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:02,253 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:02,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:02,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:02,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:02,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:02,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:34:02,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-11 15:34:02,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090842269, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
2023-07-11 15:34:02,270 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:34:02,271 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:02,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,272 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:02,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:02,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:02,297 INFO [Listener at localhost/45661] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=506 (was 512), OpenFileDescriptor=773 (was 793), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=490 (was 524), ProcessCount=176 (was 176), AvailableMemoryMB=6296 (was 6416) 2023-07-11 15:34:02,297 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-11 15:34:02,321 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=506, OpenFileDescriptor=773, MaxFileDescriptor=60000, SystemLoadAverage=490, ProcessCount=177, AvailableMemoryMB=6295 2023-07-11 15:34:02,321 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-11 15:34:02,321 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-11 15:34:02,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): 
Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:02,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 15:34:02,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:02,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:02,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:02,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:02,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:02,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:02,337 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:02,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:02,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:02,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:02,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:02,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-11 15:34:02,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:34:02,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:02,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090842355, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:34:02,356 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 15:34:02,358 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:02,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,359 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:02,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:02,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:02,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=nonexistent 2023-07-11 15:34:02,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:02,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, server=bogus:123 2023-07-11 15:34:02,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-11 15:34:02,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=bogus 2023-07-11 15:34:02,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:02,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup bogus 2023-07-11 15:34:02,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:02,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.2.10:55202 deadline: 1689090842369, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-11 15:34:02,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [bogus:123] to rsgroup bogus 2023-07-11 15:34:02,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:02,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.2.10:55202 deadline: 1689090842372, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-11 15:34:02,375 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-11 15:34:02,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=true 2023-07-11 15:34:02,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.2.10 balance rsgroup, group=bogus 2023-07-11 15:34:02,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: 
bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:02,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.2.10:55202 deadline: 1689090842380, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-11 15:34:02,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:02,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 15:34:02,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:02,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:02,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:02,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:02,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:02,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:02,401 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:02,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:02,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:02,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:02,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:02,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:34:02,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:02,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090842414, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:34:02,417 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:34:02,419 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:02,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,420 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:02,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:02,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:02,437 INFO [Listener at localhost/45661] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=510 (was 506) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a6672ab-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=773 (was 773), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=490 (was 490), ProcessCount=176 (was 177), AvailableMemoryMB=6296 (was 6295) - AvailableMemoryMB LEAK? - 2023-07-11 15:34:02,437 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-11 15:34:02,453 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=510, OpenFileDescriptor=773, MaxFileDescriptor=60000, SystemLoadAverage=490, ProcessCount=176, AvailableMemoryMB=6295 2023-07-11 15:34:02,453 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-11 15:34:02,453 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-11 15:34:02,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:02,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 15:34:02,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:02,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:02,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:02,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:02,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:02,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:02,466 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:02,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:02,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:02,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:02,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:02,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:34:02,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-11 15:34:02,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090842477, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
2023-07-11 15:34:02,477 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-11 15:34:02,479 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:02,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,480 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:02,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:02,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:02,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:02,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:02,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_testDisabledTableMove_999698547 2023-07-11 15:34:02,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_999698547 2023-07-11 15:34:02,488
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:02,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:34:02,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:02,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495] to rsgroup Group_testDisabledTableMove_999698547 2023-07-11 15:34:02,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_999698547 2023-07-11 15:34:02,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:02,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:34:02,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 15:34:02,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669] are moved back to default 2023-07-11 15:34:02,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_999698547 2023-07-11 15:34:02,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:02,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:02,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:02,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_999698547 2023-07-11 15:34:02,503 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:02,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:02,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-11 15:34:02,508 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:02,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-11 15:34:02,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-11 15:34:02,509 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:02,510 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_999698547 2023-07-11 15:34:02,510 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:02,510 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:34:02,512 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:34:02,516 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,516 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,516 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,516 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,516 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,517 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe empty. 2023-07-11 15:34:02,517 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea empty. 2023-07-11 15:34:02,517 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb empty. 2023-07-11 15:34:02,517 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f empty. 2023-07-11 15:34:02,517 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4 empty. 2023-07-11 15:34:02,518 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,518 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,518 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,518 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,518 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,518 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-11 15:34:02,534 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:02,539 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 71e7c6fb7a2bb063cbd9512ed6aea44f, NAME => 'Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:34:02,540 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 77000279dde6f7050ab496c7d8a1c2bb, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:34:02,540 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => c00838e1f90ddca3fb61abb8c34d98ea, NAME => 'Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:34:02,569 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,569 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing c00838e1f90ddca3fb61abb8c34d98ea, disabling compactions & flushes 2023-07-11 15:34:02,569 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:02,569 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:02,569 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. after waiting 0 ms 2023-07-11 15:34:02,570 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:02,570 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 
2023-07-11 15:34:02,570 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for c00838e1f90ddca3fb61abb8c34d98ea: 2023-07-11 15:34:02,570 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => fd7708efb045c0f24fa78e8bb6bb52fe, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:34:02,571 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,571 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 71e7c6fb7a2bb063cbd9512ed6aea44f, disabling compactions & flushes 2023-07-11 15:34:02,571 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 2023-07-11 15:34:02,571 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 2023-07-11 15:34:02,571 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. after waiting 0 ms 2023-07-11 15:34:02,571 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 2023-07-11 15:34:02,571 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,571 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 2023-07-11 15:34:02,571 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 77000279dde6f7050ab496c7d8a1c2bb, disabling compactions & flushes 2023-07-11 15:34:02,572 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 71e7c6fb7a2bb063cbd9512ed6aea44f: 2023-07-11 15:34:02,572 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 
2023-07-11 15:34:02,572 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:02,572 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => cd6e2f60b0da8f8c5799b62d7706eaa4, NAME => 'Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp 2023-07-11 15:34:02,572 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. after waiting 0 ms 2023-07-11 15:34:02,572 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:02,572 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:02,572 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 77000279dde6f7050ab496c7d8a1c2bb: 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing fd7708efb045c0f24fa78e8bb6bb52fe, disabling compactions & flushes 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,585 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing cd6e2f60b0da8f8c5799b62d7706eaa4, disabling compactions & flushes 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 
2023-07-11 15:34:02,585 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. after waiting 0 ms 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:02,585 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. after waiting 0 ms 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for fd7708efb045c0f24fa78e8bb6bb52fe: 2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:02,585 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 
2023-07-11 15:34:02,585 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for cd6e2f60b0da8f8c5799b62d7706eaa4: 2023-07-11 15:34:02,587 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:34:02,588 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089642588"}]},"ts":"1689089642588"} 2023-07-11 15:34:02,589 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089642588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089642588"}]},"ts":"1689089642588"} 2023-07-11 15:34:02,589 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089642588"}]},"ts":"1689089642588"} 2023-07-11 15:34:02,589 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089642588"}]},"ts":"1689089642588"} 2023-07-11 15:34:02,589 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089642588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089642588"}]},"ts":"1689089642588"} 2023-07-11 15:34:02,591 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-11 15:34:02,591 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:34:02,592 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089642592"}]},"ts":"1689089642592"} 2023-07-11 15:34:02,593 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-11 15:34:02,597 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:02,597 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:02,597 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:02,597 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:02,597 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=71e7c6fb7a2bb063cbd9512ed6aea44f, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c00838e1f90ddca3fb61abb8c34d98ea, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=77000279dde6f7050ab496c7d8a1c2bb, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd7708efb045c0f24fa78e8bb6bb52fe, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cd6e2f60b0da8f8c5799b62d7706eaa4, ASSIGN}] 2023-07-11 15:34:02,600 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cd6e2f60b0da8f8c5799b62d7706eaa4, ASSIGN 2023-07-11 15:34:02,601 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=77000279dde6f7050ab496c7d8a1c2bb, ASSIGN 2023-07-11 15:34:02,601 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd7708efb045c0f24fa78e8bb6bb52fe, ASSIGN 2023-07-11 15:34:02,601 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c00838e1f90ddca3fb61abb8c34d98ea, ASSIGN 2023-07-11 15:34:02,601 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=71e7c6fb7a2bb063cbd9512ed6aea44f, ASSIGN 2023-07-11 15:34:02,602 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cd6e2f60b0da8f8c5799b62d7706eaa4, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:34:02,602 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=77000279dde6f7050ab496c7d8a1c2bb, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:34:02,602 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd7708efb045c0f24fa78e8bb6bb52fe, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:34:02,602 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c00838e1f90ddca3fb61abb8c34d98ea, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,43957,1689089616370; forceNewPlan=false, retain=false 2023-07-11 15:34:02,603 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=71e7c6fb7a2bb063cbd9512ed6aea44f, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,45349,1689089620952; forceNewPlan=false, retain=false 2023-07-11 15:34:02,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-11 15:34:02,752 INFO [jenkins-hbase9:44179] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-11 15:34:02,756 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=71e7c6fb7a2bb063cbd9512ed6aea44f, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:02,756 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=cd6e2f60b0da8f8c5799b62d7706eaa4, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:02,756 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=c00838e1f90ddca3fb61abb8c34d98ea, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:02,756 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=fd7708efb045c0f24fa78e8bb6bb52fe, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:02,756 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642756"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089642756"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089642756"}]},"ts":"1689089642756"} 2023-07-11 15:34:02,756 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=77000279dde6f7050ab496c7d8a1c2bb, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:02,756 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642756"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089642756"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089642756"}]},"ts":"1689089642756"} 2023-07-11 15:34:02,756 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642756"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089642756"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089642756"}]},"ts":"1689089642756"} 2023-07-11 15:34:02,756 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089642756"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089642756"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089642756"}]},"ts":"1689089642756"} 2023-07-11 15:34:02,756 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089642756"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089642756"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089642756"}]},"ts":"1689089642756"} 2023-07-11 15:34:02,758 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=137, state=RUNNABLE; OpenRegionProcedure c00838e1f90ddca3fb61abb8c34d98ea, 
server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:34:02,759 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=139, state=RUNNABLE; OpenRegionProcedure fd7708efb045c0f24fa78e8bb6bb52fe, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:34:02,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=138, state=RUNNABLE; OpenRegionProcedure 77000279dde6f7050ab496c7d8a1c2bb, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:34:02,761 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=140, state=RUNNABLE; OpenRegionProcedure cd6e2f60b0da8f8c5799b62d7706eaa4, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:34:02,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=136, state=RUNNABLE; OpenRegionProcedure 71e7c6fb7a2bb063cbd9512ed6aea44f, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:34:02,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-11 15:34:02,914 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:02,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c00838e1f90ddca3fb61abb8c34d98ea, NAME => 'Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-11 15:34:02,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,916 INFO [StoreOpener-c00838e1f90ddca3fb61abb8c34d98ea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,916 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 
2023-07-11 15:34:02,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 71e7c6fb7a2bb063cbd9512ed6aea44f, NAME => 'Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-11 15:34:02,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,917 DEBUG [StoreOpener-c00838e1f90ddca3fb61abb8c34d98ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea/f 2023-07-11 15:34:02,918 DEBUG [StoreOpener-c00838e1f90ddca3fb61abb8c34d98ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea/f 2023-07-11 15:34:02,918 INFO [StoreOpener-71e7c6fb7a2bb063cbd9512ed6aea44f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,918 INFO [StoreOpener-c00838e1f90ddca3fb61abb8c34d98ea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c00838e1f90ddca3fb61abb8c34d98ea columnFamilyName f 2023-07-11 15:34:02,919 INFO [StoreOpener-c00838e1f90ddca3fb61abb8c34d98ea-1] regionserver.HStore(310): Store=c00838e1f90ddca3fb61abb8c34d98ea/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:02,919 DEBUG [StoreOpener-71e7c6fb7a2bb063cbd9512ed6aea44f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f/f 2023-07-11 15:34:02,919 DEBUG 
[StoreOpener-71e7c6fb7a2bb063cbd9512ed6aea44f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f/f 2023-07-11 15:34:02,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,920 INFO [StoreOpener-71e7c6fb7a2bb063cbd9512ed6aea44f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 71e7c6fb7a2bb063cbd9512ed6aea44f columnFamilyName f 2023-07-11 15:34:02,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,920 INFO [StoreOpener-71e7c6fb7a2bb063cbd9512ed6aea44f-1] regionserver.HStore(310): Store=71e7c6fb7a2bb063cbd9512ed6aea44f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:02,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:02,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:02,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:02,927 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened c00838e1f90ddca3fb61abb8c34d98ea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10613184160, jitterRate=-0.011570200324058533}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:02,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for c00838e1f90ddca3fb61abb8c34d98ea: 2023-07-11 15:34:02,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea., pid=141, masterSystemTime=1689089642909 2023-07-11 15:34:02,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:02,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 71e7c6fb7a2bb063cbd9512ed6aea44f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11554105120, jitterRate=0.07605989277362823}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:02,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 71e7c6fb7a2bb063cbd9512ed6aea44f: 2023-07-11 15:34:02,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:02,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:02,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:02,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f., pid=145, masterSystemTime=1689089642911 2023-07-11 15:34:02,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 77000279dde6f7050ab496c7d8a1c2bb, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-11 15:34:02,930 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=c00838e1f90ddca3fb61abb8c34d98ea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:02,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 2023-07-11 15:34:02,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 
2023-07-11 15:34:02,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:02,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fd7708efb045c0f24fa78e8bb6bb52fe, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-11 15:34:02,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,932 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=71e7c6fb7a2bb063cbd9512ed6aea44f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:02,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,932 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089642932"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089642932"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089642932"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089642932"}]},"ts":"1689089642932"} 2023-07-11 15:34:02,930 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642930"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089642930"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089642930"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089642930"}]},"ts":"1689089642930"} 2023-07-11 15:34:02,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,935 INFO [StoreOpener-fd7708efb045c0f24fa78e8bb6bb52fe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,936 INFO [StoreOpener-77000279dde6f7050ab496c7d8a1c2bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,938 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=136 2023-07-11 15:34:02,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=137 2023-07-11 15:34:02,938 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=136, state=SUCCESS; OpenRegionProcedure 71e7c6fb7a2bb063cbd9512ed6aea44f, server=jenkins-hbase9.apache.org,45349,1689089620952 in 172 msec 2023-07-11 15:34:02,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; OpenRegionProcedure c00838e1f90ddca3fb61abb8c34d98ea, server=jenkins-hbase9.apache.org,43957,1689089616370 in 177 msec 2023-07-11 15:34:02,939 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=71e7c6fb7a2bb063cbd9512ed6aea44f, ASSIGN in 341 msec 2023-07-11 15:34:02,939 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c00838e1f90ddca3fb61abb8c34d98ea, ASSIGN in 341 msec 2023-07-11 15:34:02,939 DEBUG [StoreOpener-fd7708efb045c0f24fa78e8bb6bb52fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe/f 2023-07-11 15:34:02,939 DEBUG [StoreOpener-fd7708efb045c0f24fa78e8bb6bb52fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe/f 2023-07-11 15:34:02,939 INFO [StoreOpener-fd7708efb045c0f24fa78e8bb6bb52fe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fd7708efb045c0f24fa78e8bb6bb52fe columnFamilyName f 2023-07-11 15:34:02,939 DEBUG [StoreOpener-77000279dde6f7050ab496c7d8a1c2bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb/f 2023-07-11 15:34:02,940 DEBUG 
[StoreOpener-77000279dde6f7050ab496c7d8a1c2bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb/f 2023-07-11 15:34:02,940 INFO [StoreOpener-77000279dde6f7050ab496c7d8a1c2bb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77000279dde6f7050ab496c7d8a1c2bb columnFamilyName f 2023-07-11 15:34:02,940 INFO [StoreOpener-fd7708efb045c0f24fa78e8bb6bb52fe-1] regionserver.HStore(310): Store=fd7708efb045c0f24fa78e8bb6bb52fe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:02,940 INFO [StoreOpener-77000279dde6f7050ab496c7d8a1c2bb-1] regionserver.HStore(310): Store=77000279dde6f7050ab496c7d8a1c2bb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:02,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:02,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:02,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:02,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:02,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened fd7708efb045c0f24fa78e8bb6bb52fe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11216504000, jitterRate=0.04461833834648132}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:02,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for fd7708efb045c0f24fa78e8bb6bb52fe: 2023-07-11 15:34:02,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 77000279dde6f7050ab496c7d8a1c2bb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9696693600, jitterRate=-0.0969250351190567}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:02,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 77000279dde6f7050ab496c7d8a1c2bb: 2023-07-11 15:34:02,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe., pid=142, masterSystemTime=1689089642911 2023-07-11 15:34:02,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb., pid=143, masterSystemTime=1689089642909 2023-07-11 15:34:02,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:02,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:02,950 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=fd7708efb045c0f24fa78e8bb6bb52fe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:02,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 
2023-07-11 15:34:02,950 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642950"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089642950"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089642950"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089642950"}]},"ts":"1689089642950"} 2023-07-11 15:34:02,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:02,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:02,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd6e2f60b0da8f8c5799b62d7706eaa4, NAME => 'Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-11 15:34:02,951 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=77000279dde6f7050ab496c7d8a1c2bb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:02,951 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089642951"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089642951"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089642951"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089642951"}]},"ts":"1689089642951"} 2023-07-11 15:34:02,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:02,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,952 INFO [StoreOpener-cd6e2f60b0da8f8c5799b62d7706eaa4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,953 DEBUG [StoreOpener-cd6e2f60b0da8f8c5799b62d7706eaa4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4/f 2023-07-11 15:34:02,953 DEBUG [StoreOpener-cd6e2f60b0da8f8c5799b62d7706eaa4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4/f 2023-07-11 15:34:02,954 INFO [StoreOpener-cd6e2f60b0da8f8c5799b62d7706eaa4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd6e2f60b0da8f8c5799b62d7706eaa4 columnFamilyName f 2023-07-11 15:34:02,955 INFO [StoreOpener-cd6e2f60b0da8f8c5799b62d7706eaa4-1] regionserver.HStore(310): Store=cd6e2f60b0da8f8c5799b62d7706eaa4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:02,955 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=139 2023-07-11 15:34:02,955 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=139, state=SUCCESS; OpenRegionProcedure fd7708efb045c0f24fa78e8bb6bb52fe, server=jenkins-hbase9.apache.org,45349,1689089620952 in 192 msec 2023-07-11 15:34:02,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,956 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd7708efb045c0f24fa78e8bb6bb52fe, ASSIGN in 358 msec 2023-07-11 15:34:02,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:02,959 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=138 2023-07-11 15:34:02,959 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; OpenRegionProcedure 77000279dde6f7050ab496c7d8a1c2bb, server=jenkins-hbase9.apache.org,43957,1689089616370 in 198 msec 2023-07-11 15:34:02,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=77000279dde6f7050ab496c7d8a1c2bb, ASSIGN in 362 msec 2023-07-11 
15:34:02,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:02,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened cd6e2f60b0da8f8c5799b62d7706eaa4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9479548640, jitterRate=-0.11714823544025421}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:02,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for cd6e2f60b0da8f8c5799b62d7706eaa4: 2023-07-11 15:34:02,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4., pid=144, masterSystemTime=1689089642909 2023-07-11 15:34:02,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:02,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:02,963 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=cd6e2f60b0da8f8c5799b62d7706eaa4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:02,963 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089642963"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089642963"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089642963"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089642963"}]},"ts":"1689089642963"} 2023-07-11 15:34:02,966 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=140 2023-07-11 15:34:02,966 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; OpenRegionProcedure cd6e2f60b0da8f8c5799b62d7706eaa4, server=jenkins-hbase9.apache.org,43957,1689089616370 in 203 msec 2023-07-11 15:34:02,967 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-11 15:34:02,967 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cd6e2f60b0da8f8c5799b62d7706eaa4, ASSIGN in 369 msec 2023-07-11 15:34:02,968 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:34:02,968 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089642968"}]},"ts":"1689089642968"} 2023-07-11 15:34:02,969 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-11 15:34:02,971 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:34:02,972 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 466 msec 2023-07-11 15:34:03,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-11 15:34:03,112 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-11 15:34:03,112 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-11 15:34:03,112 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:03,116 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-11 15:34:03,117 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:03,117 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 
2023-07-11 15:34:03,117 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:03,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-11 15:34:03,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:03,123 INFO [Listener at localhost/45661] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-11 15:34:03,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testDisabledTableMove 2023-07-11 15:34:03,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-11 15:34:03,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-11 15:34:03,128 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089643127"}]},"ts":"1689089643127"} 2023-07-11 15:34:03,129 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-11 15:34:03,130 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-11 15:34:03,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=71e7c6fb7a2bb063cbd9512ed6aea44f, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c00838e1f90ddca3fb61abb8c34d98ea, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=77000279dde6f7050ab496c7d8a1c2bb, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd7708efb045c0f24fa78e8bb6bb52fe, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cd6e2f60b0da8f8c5799b62d7706eaa4, UNASSIGN}] 2023-07-11 15:34:03,132 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd7708efb045c0f24fa78e8bb6bb52fe, UNASSIGN 2023-07-11 15:34:03,132 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c00838e1f90ddca3fb61abb8c34d98ea, UNASSIGN 2023-07-11 15:34:03,132 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=71e7c6fb7a2bb063cbd9512ed6aea44f, UNASSIGN 2023-07-11 15:34:03,133 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=77000279dde6f7050ab496c7d8a1c2bb, UNASSIGN 2023-07-11 15:34:03,133 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cd6e2f60b0da8f8c5799b62d7706eaa4, UNASSIGN 2023-07-11 15:34:03,133 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=fd7708efb045c0f24fa78e8bb6bb52fe, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:03,133 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=c00838e1f90ddca3fb61abb8c34d98ea, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:03,133 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089643133"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089643133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089643133"}]},"ts":"1689089643133"} 2023-07-11 15:34:03,133 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=77000279dde6f7050ab496c7d8a1c2bb, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:03,133 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=71e7c6fb7a2bb063cbd9512ed6aea44f, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:03,133 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089643133"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089643133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089643133"}]},"ts":"1689089643133"} 2023-07-11 15:34:03,133 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=cd6e2f60b0da8f8c5799b62d7706eaa4, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:03,133 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089643133"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089643133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089643133"}]},"ts":"1689089643133"} 2023-07-11 15:34:03,134 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089643133"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089643133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089643133"}]},"ts":"1689089643133"} 2023-07-11 15:34:03,134 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089643133"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089643133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089643133"}]},"ts":"1689089643133"} 2023-07-11 15:34:03,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=150, state=RUNNABLE; CloseRegionProcedure fd7708efb045c0f24fa78e8bb6bb52fe, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:34:03,135 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=149, state=RUNNABLE; CloseRegionProcedure 77000279dde6f7050ab496c7d8a1c2bb, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:34:03,136 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=148, state=RUNNABLE; CloseRegionProcedure c00838e1f90ddca3fb61abb8c34d98ea, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:34:03,137 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=151, state=RUNNABLE; CloseRegionProcedure cd6e2f60b0da8f8c5799b62d7706eaa4, server=jenkins-hbase9.apache.org,43957,1689089616370}] 2023-07-11 15:34:03,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=147, state=RUNNABLE; CloseRegionProcedure 71e7c6fb7a2bb063cbd9512ed6aea44f, server=jenkins-hbase9.apache.org,45349,1689089620952}] 2023-07-11 15:34:03,141 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-11 15:34:03,142 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-11 15:34:03,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-11 15:34:03,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:03,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:03,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 71e7c6fb7a2bb063cbd9512ed6aea44f, disabling compactions & flushes 2023-07-11 15:34:03,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing cd6e2f60b0da8f8c5799b62d7706eaa4, disabling compactions & flushes 2023-07-11 15:34:03,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 
2023-07-11 15:34:03,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:03,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:03,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 2023-07-11 15:34:03,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. after waiting 0 ms 2023-07-11 15:34:03,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:03,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. after waiting 0 ms 2023-07-11 15:34:03,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 2023-07-11 15:34:03,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:34:03,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:34:03,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4. 2023-07-11 15:34:03,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for cd6e2f60b0da8f8c5799b62d7706eaa4: 2023-07-11 15:34:03,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f. 
2023-07-11 15:34:03,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 71e7c6fb7a2bb063cbd9512ed6aea44f: 2023-07-11 15:34:03,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:03,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:03,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 77000279dde6f7050ab496c7d8a1c2bb, disabling compactions & flushes 2023-07-11 15:34:03,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:03,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:03,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. after waiting 0 ms 2023-07-11 15:34:03,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:03,296 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=cd6e2f60b0da8f8c5799b62d7706eaa4, regionState=CLOSED 2023-07-11 15:34:03,296 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089643296"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089643296"}]},"ts":"1689089643296"} 2023-07-11 15:34:03,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:03,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:03,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing fd7708efb045c0f24fa78e8bb6bb52fe, disabling compactions & flushes 2023-07-11 15:34:03,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:03,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:03,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. after waiting 0 ms 2023-07-11 15:34:03,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 
2023-07-11 15:34:03,297 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=71e7c6fb7a2bb063cbd9512ed6aea44f, regionState=CLOSED 2023-07-11 15:34:03,298 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689089643297"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089643297"}]},"ts":"1689089643297"} 2023-07-11 15:34:03,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:34:03,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:34:03,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb. 2023-07-11 15:34:03,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 77000279dde6f7050ab496c7d8a1c2bb: 2023-07-11 15:34:03,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe. 2023-07-11 15:34:03,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for fd7708efb045c0f24fa78e8bb6bb52fe: 2023-07-11 15:34:03,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:03,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:03,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing c00838e1f90ddca3fb61abb8c34d98ea, disabling compactions & flushes 2023-07-11 15:34:03,307 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:03,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:03,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. after waiting 0 ms 2023-07-11 15:34:03,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 
2023-07-11 15:34:03,307 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=77000279dde6f7050ab496c7d8a1c2bb, regionState=CLOSED 2023-07-11 15:34:03,307 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089643307"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089643307"}]},"ts":"1689089643307"} 2023-07-11 15:34:03,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:03,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=151 2023-07-11 15:34:03,308 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=147 2023-07-11 15:34:03,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=151, state=SUCCESS; CloseRegionProcedure cd6e2f60b0da8f8c5799b62d7706eaa4, server=jenkins-hbase9.apache.org,43957,1689089616370 in 162 msec 2023-07-11 15:34:03,308 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=147, state=SUCCESS; CloseRegionProcedure 71e7c6fb7a2bb063cbd9512ed6aea44f, server=jenkins-hbase9.apache.org,45349,1689089620952 in 162 msec 2023-07-11 15:34:03,309 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=fd7708efb045c0f24fa78e8bb6bb52fe, regionState=CLOSED 2023-07-11 15:34:03,309 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089643309"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089643309"}]},"ts":"1689089643309"} 2023-07-11 15:34:03,310 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cd6e2f60b0da8f8c5799b62d7706eaa4, UNASSIGN in 177 msec 2023-07-11 15:34:03,310 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=71e7c6fb7a2bb063cbd9512ed6aea44f, UNASSIGN in 177 msec 2023-07-11 15:34:03,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=149 2023-07-11 15:34:03,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=149, state=SUCCESS; CloseRegionProcedure 77000279dde6f7050ab496c7d8a1c2bb, server=jenkins-hbase9.apache.org,43957,1689089616370 in 174 msec 2023-07-11 15:34:03,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=150 2023-07-11 15:34:03,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=150, state=SUCCESS; CloseRegionProcedure fd7708efb045c0f24fa78e8bb6bb52fe, server=jenkins-hbase9.apache.org,45349,1689089620952 in 175 msec 2023-07-11 15:34:03,312 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=77000279dde6f7050ab496c7d8a1c2bb, UNASSIGN in 180 msec 2023-07-11 15:34:03,312 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:34:03,312 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fd7708efb045c0f24fa78e8bb6bb52fe, UNASSIGN in 180 msec 2023-07-11 15:34:03,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea. 2023-07-11 15:34:03,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for c00838e1f90ddca3fb61abb8c34d98ea: 2023-07-11 15:34:03,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:03,314 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=c00838e1f90ddca3fb61abb8c34d98ea, regionState=CLOSED 2023-07-11 15:34:03,314 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689089643314"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089643314"}]},"ts":"1689089643314"} 2023-07-11 15:34:03,316 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=148 2023-07-11 15:34:03,316 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=148, state=SUCCESS; CloseRegionProcedure c00838e1f90ddca3fb61abb8c34d98ea, server=jenkins-hbase9.apache.org,43957,1689089616370 in 179 msec 2023-07-11 15:34:03,318 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=146 2023-07-11 15:34:03,318 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c00838e1f90ddca3fb61abb8c34d98ea, UNASSIGN in 185 msec 2023-07-11 15:34:03,318 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089643318"}]},"ts":"1689089643318"} 2023-07-11 15:34:03,319 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-11 15:34:03,320 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-11 15:34:03,322 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 197 msec 2023-07-11 15:34:03,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-11 15:34:03,430 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-11 15:34:03,430 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to 
Group_testDisabledTableMove_999698547 2023-07-11 15:34:03,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_999698547 2023-07-11 15:34:03,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:03,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_999698547 2023-07-11 15:34:03,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:03,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:34:03,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-11 15:34:03,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_999698547, current retry=0 2023-07-11 15:34:03,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_999698547. 2023-07-11 15:34:03,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:03,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:03,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:03,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-11 15:34:03,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:03,442 INFO [Listener at localhost/45661] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-11 15:34:03,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable Group_testDisabledTableMove 2023-07-11 15:34:03,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at 
org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:03,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 919 service: MasterService methodName: DisableTable size: 89 connection: 172.31.2.10:55202 deadline: 1689089703442, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-11 15:34:03,443 DEBUG [Listener at localhost/45661] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-11 15:34:03,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete Group_testDisabledTableMove 2023-07-11 15:34:03,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 15:34:03,447 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 15:34:03,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_999698547' 2023-07-11 15:34:03,447 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 15:34:03,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:03,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_999698547 2023-07-11 15:34:03,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:03,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:34:03,455 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:03,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-11 15:34:03,455 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:03,455 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:03,455 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:03,455 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:03,458 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f/recovered.edits] 2023-07-11 15:34:03,459 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea/recovered.edits] 2023-07-11 15:34:03,459 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb/recovered.edits] 2023-07-11 15:34:03,459 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe/recovered.edits] 2023-07-11 15:34:03,460 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4/f, FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4/recovered.edits] 2023-07-11 15:34:03,470 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb/recovered.edits/4.seqid to 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb/recovered.edits/4.seqid 2023-07-11 15:34:03,470 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f/recovered.edits/4.seqid 2023-07-11 15:34:03,471 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe/recovered.edits/4.seqid 2023-07-11 15:34:03,471 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea/recovered.edits/4.seqid 2023-07-11 15:34:03,471 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/77000279dde6f7050ab496c7d8a1c2bb 2023-07-11 15:34:03,471 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/71e7c6fb7a2bb063cbd9512ed6aea44f 2023-07-11 15:34:03,472 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4/recovered.edits/4.seqid to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/archive/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4/recovered.edits/4.seqid 2023-07-11 15:34:03,472 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/fd7708efb045c0f24fa78e8bb6bb52fe 2023-07-11 15:34:03,472 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/c00838e1f90ddca3fb61abb8c34d98ea 2023-07-11 15:34:03,473 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/.tmp/data/default/Group_testDisabledTableMove/cd6e2f60b0da8f8c5799b62d7706eaa4 2023-07-11 15:34:03,473 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-11 15:34:03,476 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting 
regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 15:34:03,478 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-11 15:34:03,483 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-11 15:34:03,484 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 15:34:03,484 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-11 15:34:03,484 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089643484"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:03,484 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089643484"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:03,484 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089643484"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:03,484 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089643484"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:03,484 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089643484"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:03,486 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-11 15:34:03,486 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 71e7c6fb7a2bb063cbd9512ed6aea44f, NAME => 'Group_testDisabledTableMove,,1689089642505.71e7c6fb7a2bb063cbd9512ed6aea44f.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => c00838e1f90ddca3fb61abb8c34d98ea, NAME => 'Group_testDisabledTableMove,aaaaa,1689089642505.c00838e1f90ddca3fb61abb8c34d98ea.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 77000279dde6f7050ab496c7d8a1c2bb, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689089642505.77000279dde6f7050ab496c7d8a1c2bb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => fd7708efb045c0f24fa78e8bb6bb52fe, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689089642505.fd7708efb045c0f24fa78e8bb6bb52fe.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => cd6e2f60b0da8f8c5799b62d7706eaa4, NAME => 'Group_testDisabledTableMove,zzzzz,1689089642505.cd6e2f60b0da8f8c5799b62d7706eaa4.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-11 15:34:03,486 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 
'Group_testDisabledTableMove' as deleted. 2023-07-11 15:34:03,486 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689089643486"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:03,488 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-11 15:34:03,489 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 15:34:03,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 46 msec 2023-07-11 15:34:03,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-11 15:34:03,557 INFO [Listener at localhost/45661] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-11 15:34:03,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:03,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:03,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:03,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 15:34:03,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:03,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495] to rsgroup default 2023-07-11 15:34:03,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:03,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_999698547 2023-07-11 15:34:03,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:03,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:34:03,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_999698547, current retry=0 2023-07-11 15:34:03,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase9.apache.org,36133,1689089616857, jenkins-hbase9.apache.org,42495,1689089616669] are moved back to Group_testDisabledTableMove_999698547 2023-07-11 15:34:03,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_999698547 => default 2023-07-11 15:34:03,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:03,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_testDisabledTableMove_999698547 2023-07-11 15:34:03,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:03,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:03,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 15:34:03,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:03,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:03,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 15:34:03,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:03,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:03,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:03,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:03,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:03,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:03,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:03,585 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:03,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:03,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:03,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:03,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:03,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:03,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:03,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:03,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:34:03,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:03,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 953 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090843598, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:34:03,599 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:34:03,600 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:03,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:03,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:03,601 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:03,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:03,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:03,623 INFO [Listener at localhost/45661] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512 (was 510) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-146389156_17 at /127.0.0.1:44214 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2ad2b5e2-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1903062719_17 at /127.0.0.1:38188 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1705aebc-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=794 (was 773) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=451 (was 490), ProcessCount=177 (was 176) - ProcessCount LEAK? 
-, AvailableMemoryMB=6282 (was 6295) 2023-07-11 15:34:03,624 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-11 15:34:03,646 INFO [Listener at localhost/45661] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=512, OpenFileDescriptor=794, MaxFileDescriptor=60000, SystemLoadAverage=451, ProcessCount=176, AvailableMemoryMB=6280 2023-07-11 15:34:03,646 WARN [Listener at localhost/45661] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-11 15:34:03,646 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-11 15:34:03,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:03,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:03,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:03,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 15:34:03,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:03,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:03,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:03,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:03,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:03,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:03,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:03,662 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:03,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:03,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:03,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-11 15:34:03,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:03,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:03,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:03,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:03,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:44179] to rsgroup master 2023-07-11 15:34:03,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:03,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] ipc.CallRunner(144): callId: 981 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:55202 deadline: 1689090843680, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 2023-07-11 15:34:03,681 WARN [Listener at localhost/45661] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:44179 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 15:34:03,683 INFO [Listener at localhost/45661] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:03,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:03,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:03,685 INFO [Listener at localhost/45661] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:36133, jenkins-hbase9.apache.org:42495, jenkins-hbase9.apache.org:43957, jenkins-hbase9.apache.org:45349], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:03,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:03,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44179] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:03,686 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-11 15:34:03,686 INFO [Listener at localhost/45661] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-11 15:34:03,686 DEBUG [Listener at localhost/45661] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7d54ff60 to 127.0.0.1:49791 2023-07-11 15:34:03,686 DEBUG [Listener at localhost/45661] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,687 DEBUG [Listener at localhost/45661] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-11 15:34:03,687 DEBUG [Listener at localhost/45661] util.JVMClusterUtil(257): Found active master hash=1744786448, stopped=false 2023-07-11 15:34:03,688 DEBUG [Listener at localhost/45661] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 15:34:03,688 DEBUG [Listener at localhost/45661] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 15:34:03,688 INFO [Listener at localhost/45661] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:34:03,690 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:03,690 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:03,690 INFO [Listener at localhost/45661] procedure2.ProcedureExecutor(629): Stopping 2023-07-11 15:34:03,690 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:03,690 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:03,690 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:03,690 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:03,691 DEBUG [Listener at localhost/45661] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x569c6504 to 127.0.0.1:49791 2023-07-11 15:34:03,691 DEBUG [Listener at localhost/45661] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,691 INFO [Listener at localhost/45661] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,43957,1689089616370' ***** 2023-07-11 15:34:03,691 INFO [Listener at localhost/45661] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:03,691 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:03,693 INFO [Listener at localhost/45661] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,42495,1689089616669' ***** 2023-07-11 15:34:03,693 INFO [Listener at localhost/45661] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:03,696 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:03,702 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:03,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:03,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:03,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:03,700 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:03,697 INFO [Listener at localhost/45661] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,36133,1689089616857' ***** 2023-07-11 15:34:03,706 INFO [Listener at localhost/45661] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:03,706 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,36133,1689089616857' ***** 2023-07-11 15:34:03,706 INFO [RS:2;jenkins-hbase9:36133] 
regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-07-11 15:34:03,706 INFO [Listener at localhost/45661] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,45349,1689089620952' ***** 2023-07-11 15:34:03,706 INFO [Listener at localhost/45661] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:03,706 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:03,706 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:03,712 INFO [RS:0;jenkins-hbase9:43957] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5eb331b4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:03,723 INFO [RS:3;jenkins-hbase9:45349] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@dfc4cbd{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:03,723 INFO [RS:0;jenkins-hbase9:43957] server.AbstractConnector(383): Stopped ServerConnector@3a7e07f7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:03,724 INFO [RS:0;jenkins-hbase9:43957] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:03,724 INFO [RS:3;jenkins-hbase9:45349] server.AbstractConnector(383): Stopped ServerConnector@4f8a6a12{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:03,724 INFO [RS:3;jenkins-hbase9:45349] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:03,724 INFO [RS:1;jenkins-hbase9:42495] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7fa26dec{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:03,725 INFO [RS:3;jenkins-hbase9:45349] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@30e55351{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:03,724 INFO [RS:0;jenkins-hbase9:43957] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5016aa5d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:03,724 INFO [RS:2;jenkins-hbase9:36133] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@15b217d2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:03,726 INFO [RS:0;jenkins-hbase9:43957] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69038eaf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:03,726 INFO [RS:1;jenkins-hbase9:42495] server.AbstractConnector(383): Stopped ServerConnector@59a4b1f9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:03,726 
INFO [RS:1;jenkins-hbase9:42495] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:03,726 INFO [RS:2;jenkins-hbase9:36133] server.AbstractConnector(383): Stopped ServerConnector@51804693{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:03,727 INFO [RS:2;jenkins-hbase9:36133] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:03,726 INFO [RS:3;jenkins-hbase9:45349] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1b8a7a95{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:03,728 INFO [RS:1;jenkins-hbase9:42495] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@256bb865{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:03,728 INFO [RS:2;jenkins-hbase9:36133] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7e20f29d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:03,730 INFO [RS:1;jenkins-hbase9:42495] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@328311bb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:03,730 INFO [RS:2;jenkins-hbase9:36133] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@727c7cf0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:03,730 INFO [RS:3;jenkins-hbase9:45349] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:03,730 INFO [RS:1;jenkins-hbase9:42495] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:03,731 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:03,731 INFO [RS:1;jenkins-hbase9:42495] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:03,731 INFO [RS:2;jenkins-hbase9:36133] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:03,731 INFO [RS:1;jenkins-hbase9:42495] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:03,731 INFO [RS:2;jenkins-hbase9:36133] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:03,731 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:03,731 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:34:03,731 INFO [RS:2;jenkins-hbase9:36133] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-11 15:34:03,731 DEBUG [RS:1;jenkins-hbase9:42495] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x14159c69 to 127.0.0.1:49791 2023-07-11 15:34:03,731 DEBUG [RS:1;jenkins-hbase9:42495] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,731 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,42495,1689089616669; all regions closed. 2023-07-11 15:34:03,732 INFO [RS:0;jenkins-hbase9:43957] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:03,732 INFO [RS:0;jenkins-hbase9:43957] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:03,732 INFO [RS:0;jenkins-hbase9:43957] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:03,732 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(3305): Received CLOSE for e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:03,731 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:03,732 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:03,732 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:03,732 DEBUG [RS:0;jenkins-hbase9:43957] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5c3745f1 to 127.0.0.1:49791 2023-07-11 15:34:03,733 DEBUG [RS:0;jenkins-hbase9:43957] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,733 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-11 15:34:03,733 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1478): Online Regions={e1d75a5b638d7310f1fb4df8d75d5f7b=testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b.} 2023-07-11 15:34:03,733 DEBUG [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1504): Waiting on e1d75a5b638d7310f1fb4df8d75d5f7b 2023-07-11 15:34:03,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e1d75a5b638d7310f1fb4df8d75d5f7b, disabling compactions & flushes 2023-07-11 15:34:03,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:03,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:03,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. after waiting 0 ms 2023-07-11 15:34:03,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 
2023-07-11 15:34:03,735 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:03,735 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:03,736 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:03,736 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:03,731 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:34:03,731 INFO [RS:3;jenkins-hbase9:45349] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:03,736 INFO [RS:3;jenkins-hbase9:45349] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:03,736 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(3305): Received CLOSE for 203d287260feed5f883777745504f77e 2023-07-11 15:34:03,737 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(3305): Received CLOSE for 48c2bf7782ee61fbc67dfe7aa5f38abc 2023-07-11 15:34:03,737 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(3305): Received CLOSE for db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:34:03,737 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:03,737 DEBUG [RS:3;jenkins-hbase9:45349] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0bc50a61 to 127.0.0.1:49791 2023-07-11 15:34:03,737 DEBUG [RS:3;jenkins-hbase9:45349] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,737 INFO [RS:3;jenkins-hbase9:45349] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:03,737 INFO [RS:3;jenkins-hbase9:45349] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:03,737 INFO [RS:3;jenkins-hbase9:45349] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 15:34:03,737 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-11 15:34:03,736 DEBUG [RS:2;jenkins-hbase9:36133] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x097c2907 to 127.0.0.1:49791 2023-07-11 15:34:03,743 DEBUG [RS:2;jenkins-hbase9:36133] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,744 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,36133,1689089616857; all regions closed. 2023-07-11 15:34:03,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 203d287260feed5f883777745504f77e, disabling compactions & flushes 2023-07-11 15:34:03,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:03,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:03,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 
after waiting 0 ms 2023-07-11 15:34:03,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:03,757 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-11 15:34:03,757 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1478): Online Regions={203d287260feed5f883777745504f77e=unmovedTable,,1689089638545.203d287260feed5f883777745504f77e., 48c2bf7782ee61fbc67dfe7aa5f38abc=hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc., 1588230740=hbase:meta,,1.1588230740, db11ce5f2f749a24653755c2ee31ecfe=hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe.} 2023-07-11 15:34:03,758 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 15:34:03,761 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 15:34:03,761 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 15:34:03,761 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 15:34:03,762 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 15:34:03,760 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1504): Waiting on 1588230740, 203d287260feed5f883777745504f77e, 48c2bf7782ee61fbc67dfe7aa5f38abc, db11ce5f2f749a24653755c2ee31ecfe 2023-07-11 15:34:03,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/testRename/e1d75a5b638d7310f1fb4df8d75d5f7b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-11 15:34:03,763 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=77.76 KB heapSize=122.41 KB 2023-07-11 15:34:03,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 2023-07-11 15:34:03,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e1d75a5b638d7310f1fb4df8d75d5f7b: 2023-07-11 15:34:03,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689089636869.e1d75a5b638d7310f1fb4df8d75d5f7b. 
2023-07-11 15:34:03,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/default/unmovedTable/203d287260feed5f883777745504f77e/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-11 15:34:03,773 DEBUG [RS:1;jenkins-hbase9:42495] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs 2023-07-11 15:34:03,773 INFO [RS:1;jenkins-hbase9:42495] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C42495%2C1689089616669.meta:.meta(num 1689089619310) 2023-07-11 15:34:03,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:03,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 203d287260feed5f883777745504f77e: 2023-07-11 15:34:03,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689089638545.203d287260feed5f883777745504f77e. 2023-07-11 15:34:03,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 48c2bf7782ee61fbc67dfe7aa5f38abc, disabling compactions & flushes 2023-07-11 15:34:03,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:34:03,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:34:03,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. after waiting 0 ms 2023-07-11 15:34:03,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:34:03,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 48c2bf7782ee61fbc67dfe7aa5f38abc 1/1 column families, dataSize=27.08 KB heapSize=44.70 KB 2023-07-11 15:34:03,780 DEBUG [RS:2;jenkins-hbase9:36133] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs 2023-07-11 15:34:03,781 INFO [RS:2;jenkins-hbase9:36133] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C36133%2C1689089616857:(num 1689089619089) 2023-07-11 15:34:03,781 DEBUG [RS:2;jenkins-hbase9:36133] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,781 INFO [RS:2;jenkins-hbase9:36133] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:03,803 INFO [RS:2;jenkins-hbase9:36133] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:03,803 INFO [RS:2;jenkins-hbase9:36133] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:03,803 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-11 15:34:03,803 INFO [RS:2;jenkins-hbase9:36133] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:03,804 INFO [RS:2;jenkins-hbase9:36133] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 15:34:03,806 INFO [RS:2;jenkins-hbase9:36133] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:36133 2023-07-11 15:34:03,807 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-11 15:34:03,807 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-11 15:34:03,814 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-11 15:34:03,814 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-11 15:34:03,816 DEBUG [RS:1;jenkins-hbase9:42495] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs 2023-07-11 15:34:03,816 INFO [RS:1;jenkins-hbase9:42495] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C42495%2C1689089616669:(num 1689089619088) 2023-07-11 15:34:03,816 DEBUG [RS:1;jenkins-hbase9:42495] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,816 INFO [RS:1;jenkins-hbase9:42495] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:03,818 INFO [RS:1;jenkins-hbase9:42495] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:03,820 INFO [RS:1;jenkins-hbase9:42495] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:03,820 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 15:34:03,820 INFO [RS:1;jenkins-hbase9:42495] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:03,820 INFO [RS:1;jenkins-hbase9:42495] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-11 15:34:03,823 INFO [RS:1;jenkins-hbase9:42495] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:42495 2023-07-11 15:34:03,834 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:34:03,835 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:03,835 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:03,835 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:34:03,835 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:03,838 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:34:03,838 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:03,838 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42495,1689089616669 2023-07-11 15:34:03,841 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:34:03,842 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:03,842 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:34:03,842 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:34:03,843 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,36133,1689089616857 2023-07-11 15:34:03,846 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=71.95 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/info/f9c945dadbea48e393c9f376ca051894 2023-07-11 15:34:03,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.08 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/.tmp/m/12e2a0b0819b43879a302835a6201477 2023-07-11 15:34:03,859 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f9c945dadbea48e393c9f376ca051894 2023-07-11 15:34:03,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 12e2a0b0819b43879a302835a6201477 2023-07-11 15:34:03,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/.tmp/m/12e2a0b0819b43879a302835a6201477 as hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m/12e2a0b0819b43879a302835a6201477 2023-07-11 15:34:03,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 12e2a0b0819b43879a302835a6201477 2023-07-11 15:34:03,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/m/12e2a0b0819b43879a302835a6201477, entries=28, sequenceid=101, filesize=6.1 K 2023-07-11 15:34:03,890 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.08 KB/27727, heapSize ~44.68 KB/45752, currentSize=0 B/0 for 48c2bf7782ee61fbc67dfe7aa5f38abc in 110ms, sequenceid=101, compaction requested=false 2023-07-11 15:34:03,916 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/rep_barrier/7dae71ba850142dd9191856a70121be3 2023-07-11 15:34:03,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/rsgroup/48c2bf7782ee61fbc67dfe7aa5f38abc/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-11 15:34:03,919 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:34:03,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:34:03,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 48c2bf7782ee61fbc67dfe7aa5f38abc: 2023-07-11 15:34:03,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689089619705.48c2bf7782ee61fbc67dfe7aa5f38abc. 2023-07-11 15:34:03,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing db11ce5f2f749a24653755c2ee31ecfe, disabling compactions & flushes 2023-07-11 15:34:03,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:34:03,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:34:03,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. after waiting 0 ms 2023-07-11 15:34:03,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:34:03,925 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7dae71ba850142dd9191856a70121be3 2023-07-11 15:34:03,934 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,43957,1689089616370; all regions closed. 2023-07-11 15:34:03,936 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,42495,1689089616669] 2023-07-11 15:34:03,936 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,42495,1689089616669; numProcessing=1 2023-07-11 15:34:03,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/namespace/db11ce5f2f749a24653755c2ee31ecfe/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12 2023-07-11 15:34:03,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 2023-07-11 15:34:03,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for db11ce5f2f749a24653755c2ee31ecfe: 2023-07-11 15:34:03,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689089619688.db11ce5f2f749a24653755c2ee31ecfe. 
2023-07-11 15:34:03,962 DEBUG [RS:0;jenkins-hbase9:43957] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs 2023-07-11 15:34:03,963 INFO [RS:0;jenkins-hbase9:43957] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C43957%2C1689089616370:(num 1689089619089) 2023-07-11 15:34:03,963 DEBUG [RS:0;jenkins-hbase9:43957] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:03,963 INFO [RS:0;jenkins-hbase9:43957] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:03,963 INFO [RS:0;jenkins-hbase9:43957] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:03,963 INFO [RS:0;jenkins-hbase9:43957] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:03,963 DEBUG [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-11 15:34:03,963 INFO [RS:0;jenkins-hbase9:43957] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:03,963 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 15:34:03,963 INFO [RS:0;jenkins-hbase9:43957] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 15:34:03,965 INFO [RS:0;jenkins-hbase9:43957] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:43957 2023-07-11 15:34:03,966 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/table/1b48fed0c5b74ef5ae329ce308a6d85b 2023-07-11 15:34:03,974 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1b48fed0c5b74ef5ae329ce308a6d85b 2023-07-11 15:34:03,975 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/info/f9c945dadbea48e393c9f376ca051894 as hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info/f9c945dadbea48e393c9f376ca051894 2023-07-11 15:34:03,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f9c945dadbea48e393c9f376ca051894 2023-07-11 15:34:03,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/info/f9c945dadbea48e393c9f376ca051894, entries=97, sequenceid=214, filesize=16.0 K 2023-07-11 15:34:03,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/rep_barrier/7dae71ba850142dd9191856a70121be3 as 
hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/rep_barrier/7dae71ba850142dd9191856a70121be3 2023-07-11 15:34:04,014 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7dae71ba850142dd9191856a70121be3 2023-07-11 15:34:04,014 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/rep_barrier/7dae71ba850142dd9191856a70121be3, entries=18, sequenceid=214, filesize=6.9 K 2023-07-11 15:34:04,018 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/.tmp/table/1b48fed0c5b74ef5ae329ce308a6d85b as hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table/1b48fed0c5b74ef5ae329ce308a6d85b 2023-07-11 15:34:04,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1b48fed0c5b74ef5ae329ce308a6d85b 2023-07-11 15:34:04,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/table/1b48fed0c5b74ef5ae329ce308a6d85b, entries=27, sequenceid=214, filesize=7.2 K 2023-07-11 15:34:04,030 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~77.76 KB/79623, heapSize ~122.36 KB/125296, currentSize=0 B/0 for 1588230740 in 268ms, sequenceid=214, compaction requested=false 2023-07-11 15:34:04,039 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:04,039 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,43957,1689089616370 2023-07-11 15:34:04,039 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:04,040 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,42495,1689089616669 already deleted, retry=false 2023-07-11 15:34:04,040 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,42495,1689089616669 expired; onlineServers=3 2023-07-11 15:34:04,040 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,36133,1689089616857] 2023-07-11 15:34:04,040 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,36133,1689089616857; numProcessing=2 2023-07-11 15:34:04,042 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase9.apache.org,36133,1689089616857 already deleted, retry=false 2023-07-11 15:34:04,042 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,36133,1689089616857 expired; onlineServers=2 2023-07-11 15:34:04,042 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,43957,1689089616370] 2023-07-11 15:34:04,043 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,43957,1689089616370; numProcessing=3 2023-07-11 15:34:04,043 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,43957,1689089616370 already deleted, retry=false 2023-07-11 15:34:04,044 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,43957,1689089616370 expired; onlineServers=1 2023-07-11 15:34:04,052 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/data/hbase/meta/1588230740/recovered.edits/217.seqid, newMaxSeqId=217, maxSeqId=19 2023-07-11 15:34:04,052 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:34:04,053 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 15:34:04,053 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 15:34:04,053 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-11 15:34:04,091 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,091 INFO [RS:1;jenkins-hbase9:42495] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,42495,1689089616669; zookeeper connection closed. 2023-07-11 15:34:04,091 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:42495-0x10154f6e2600002, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,091 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5eff379f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5eff379f 2023-07-11 15:34:04,163 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,45349,1689089620952; all regions closed. 
2023-07-11 15:34:04,168 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/WALs/jenkins-hbase9.apache.org,45349,1689089620952/jenkins-hbase9.apache.org%2C45349%2C1689089620952.meta.1689089622404.meta not finished, retry = 0 2023-07-11 15:34:04,191 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,191 INFO [RS:2;jenkins-hbase9:36133] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,36133,1689089616857; zookeeper connection closed. 2023-07-11 15:34:04,191 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:36133-0x10154f6e2600003, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,191 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5a886d56] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5a886d56 2023-07-11 15:34:04,271 DEBUG [RS:3;jenkins-hbase9:45349] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs 2023-07-11 15:34:04,271 INFO [RS:3;jenkins-hbase9:45349] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C45349%2C1689089620952.meta:.meta(num 1689089622404) 2023-07-11 15:34:04,283 DEBUG [RS:3;jenkins-hbase9:45349] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/oldWALs 2023-07-11 15:34:04,283 INFO [RS:3;jenkins-hbase9:45349] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C45349%2C1689089620952:(num 1689089621496) 2023-07-11 15:34:04,283 DEBUG [RS:3;jenkins-hbase9:45349] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:04,283 INFO [RS:3;jenkins-hbase9:45349] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:04,284 INFO [RS:3;jenkins-hbase9:45349] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:04,284 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-11 15:34:04,285 INFO [RS:3;jenkins-hbase9:45349] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:45349 2023-07-11 15:34:04,286 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,45349,1689089620952 2023-07-11 15:34:04,286 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:04,288 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,45349,1689089620952] 2023-07-11 15:34:04,288 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,45349,1689089620952; numProcessing=4 2023-07-11 15:34:04,289 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,45349,1689089620952 already deleted, retry=false 2023-07-11 15:34:04,289 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,45349,1689089620952 expired; onlineServers=0 2023-07-11 15:34:04,289 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,44179,1689089614389' ***** 2023-07-11 15:34:04,289 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-11 15:34:04,290 DEBUG [M:0;jenkins-hbase9:44179] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26c25e2b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:04,290 INFO [M:0;jenkins-hbase9:44179] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:04,292 INFO [RS:0;jenkins-hbase9:43957] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,43957,1689089616370; zookeeper connection closed. 
2023-07-11 15:34:04,292 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,292 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:43957-0x10154f6e2600001, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,292 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@40b386d2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@40b386d2 2023-07-11 15:34:04,292 INFO [M:0;jenkins-hbase9:44179] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@33f992f8{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 15:34:04,292 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:04,293 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:04,293 INFO [M:0;jenkins-hbase9:44179] server.AbstractConnector(383): Stopped ServerConnector@4eb5bcb4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:04,293 INFO [M:0;jenkins-hbase9:44179] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:04,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:04,294 INFO [M:0;jenkins-hbase9:44179] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@758a4850{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:04,294 INFO [M:0;jenkins-hbase9:44179] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d059f15{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:04,295 INFO [M:0;jenkins-hbase9:44179] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,44179,1689089614389 2023-07-11 15:34:04,295 INFO [M:0;jenkins-hbase9:44179] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,44179,1689089614389; all regions closed. 2023-07-11 15:34:04,295 DEBUG [M:0;jenkins-hbase9:44179] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:04,295 INFO [M:0;jenkins-hbase9:44179] master.HMaster(1491): Stopping master jetty server 2023-07-11 15:34:04,295 INFO [M:0;jenkins-hbase9:44179] server.AbstractConnector(383): Stopped ServerConnector@46817a75{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:04,296 DEBUG [M:0;jenkins-hbase9:44179] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-11 15:34:04,296 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. 
Exiting. 2023-07-11 15:34:04,296 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089618455] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089618455,5,FailOnTimeoutGroup] 2023-07-11 15:34:04,296 DEBUG [M:0;jenkins-hbase9:44179] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-11 15:34:04,296 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089618458] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089618458,5,FailOnTimeoutGroup] 2023-07-11 15:34:04,296 INFO [M:0;jenkins-hbase9:44179] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-11 15:34:04,296 INFO [M:0;jenkins-hbase9:44179] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-11 15:34:04,297 INFO [M:0;jenkins-hbase9:44179] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [] on shutdown 2023-07-11 15:34:04,297 DEBUG [M:0;jenkins-hbase9:44179] master.HMaster(1512): Stopping service threads 2023-07-11 15:34:04,297 INFO [M:0;jenkins-hbase9:44179] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-11 15:34:04,297 ERROR [M:0;jenkins-hbase9:44179] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-11 15:34:04,298 INFO [M:0;jenkins-hbase9:44179] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-11 15:34:04,298 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-11 15:34:04,298 DEBUG [M:0;jenkins-hbase9:44179] zookeeper.ZKUtil(398): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-11 15:34:04,298 WARN [M:0;jenkins-hbase9:44179] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-11 15:34:04,298 INFO [M:0;jenkins-hbase9:44179] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-11 15:34:04,299 INFO [M:0;jenkins-hbase9:44179] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-11 15:34:04,299 DEBUG [M:0;jenkins-hbase9:44179] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 15:34:04,299 INFO [M:0;jenkins-hbase9:44179] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:04,299 DEBUG [M:0;jenkins-hbase9:44179] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 15:34:04,299 DEBUG [M:0;jenkins-hbase9:44179] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 15:34:04,299 DEBUG [M:0;jenkins-hbase9:44179] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:04,299 INFO [M:0;jenkins-hbase9:44179] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=528.92 KB heapSize=633.16 KB 2023-07-11 15:34:04,313 INFO [M:0;jenkins-hbase9:44179] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=528.92 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/08a540b574e444a08ff01f333092d029 2023-07-11 15:34:04,319 DEBUG [M:0;jenkins-hbase9:44179] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/08a540b574e444a08ff01f333092d029 as hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/08a540b574e444a08ff01f333092d029 2023-07-11 15:34:04,325 INFO [M:0;jenkins-hbase9:44179] regionserver.HStore(1080): Added hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/08a540b574e444a08ff01f333092d029, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-11 15:34:04,326 INFO [M:0;jenkins-hbase9:44179] regionserver.HRegion(2948): Finished flush of dataSize ~528.92 KB/541611, heapSize ~633.14 KB/648336, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=1176, compaction requested=false 2023-07-11 15:34:04,327 INFO [M:0;jenkins-hbase9:44179] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:04,327 DEBUG [M:0;jenkins-hbase9:44179] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:34:04,334 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 15:34:04,334 INFO [M:0;jenkins-hbase9:44179] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-11 15:34:04,334 INFO [M:0;jenkins-hbase9:44179] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:44179 2023-07-11 15:34:04,336 DEBUG [M:0;jenkins-hbase9:44179] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,44179,1689089614389 already deleted, retry=false 2023-07-11 15:34:04,392 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,392 INFO [RS:3;jenkins-hbase9:45349] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,45349,1689089620952; zookeeper connection closed. 
2023-07-11 15:34:04,392 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): regionserver:45349-0x10154f6e260000b, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,392 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@356bbc4c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@356bbc4c 2023-07-11 15:34:04,393 INFO [Listener at localhost/45661] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-11 15:34:04,492 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,492 INFO [M:0;jenkins-hbase9:44179] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,44179,1689089614389; zookeeper connection closed. 2023-07-11 15:34:04,492 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): master:44179-0x10154f6e2600000, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:04,494 WARN [Listener at localhost/45661] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 15:34:04,498 INFO [Listener at localhost/45661] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 15:34:04,571 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:34:04,571 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 15:34:04,571 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 15:34:04,602 WARN [BP-369846627-172.31.2.10-1689089610610 heartbeating to localhost/127.0.0.1:43853] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 15:34:04,602 WARN [BP-369846627-172.31.2.10-1689089610610 heartbeating to localhost/127.0.0.1:43853] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-369846627-172.31.2.10-1689089610610 (Datanode Uuid c471a55d-9d41-4630-9b0d-cc9662e3bd64) service to localhost/127.0.0.1:43853 2023-07-11 15:34:04,603 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data5/current/BP-369846627-172.31.2.10-1689089610610] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:04,604 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data6/current/BP-369846627-172.31.2.10-1689089610610] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:04,606 WARN [Listener 
at localhost/45661] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 15:34:04,611 INFO [Listener at localhost/45661] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 15:34:04,714 WARN [BP-369846627-172.31.2.10-1689089610610 heartbeating to localhost/127.0.0.1:43853] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 15:34:04,714 WARN [BP-369846627-172.31.2.10-1689089610610 heartbeating to localhost/127.0.0.1:43853] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-369846627-172.31.2.10-1689089610610 (Datanode Uuid 384fbc8b-0e27-4fb5-8fd6-b2327a4479cf) service to localhost/127.0.0.1:43853 2023-07-11 15:34:04,715 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data3/current/BP-369846627-172.31.2.10-1689089610610] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:04,715 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data4/current/BP-369846627-172.31.2.10-1689089610610] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:04,717 WARN [Listener at localhost/45661] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 15:34:04,721 INFO [Listener at localhost/45661] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 15:34:04,825 WARN [BP-369846627-172.31.2.10-1689089610610 heartbeating to localhost/127.0.0.1:43853] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 15:34:04,825 WARN [BP-369846627-172.31.2.10-1689089610610 heartbeating to localhost/127.0.0.1:43853] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-369846627-172.31.2.10-1689089610610 (Datanode Uuid 8e797f1a-c0f5-4c95-bc13-30e2cd63813a) service to localhost/127.0.0.1:43853 2023-07-11 15:34:04,826 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data1/current/BP-369846627-172.31.2.10-1689089610610] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:04,826 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/cluster_b45952ef-c7fd-0191-a03c-21e913dce0ec/dfs/data/data2/current/BP-369846627-172.31.2.10-1689089610610] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:04,858 INFO [Listener at localhost/45661] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 15:34:04,977 INFO [Listener at localhost/45661] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-11 15:34:05,027 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(1293): Minicluster is down 
2023-07-11 15:34:05,027 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-11 15:34:05,027 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.log.dir so I do NOT create it in target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81 2023-07-11 15:34:05,027 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/57b5e18c-0c29-f523-9545-50091d35fe53/hadoop.tmp.dir so I do NOT create it in target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81 2023-07-11 15:34:05,027 INFO [Listener at localhost/45661] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e, deleteOnExit=true 2023-07-11 15:34:05,027 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-11 15:34:05,027 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/test.cache.data in system properties and HBase conf 2023-07-11 15:34:05,027 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.tmp.dir in system properties and HBase conf 2023-07-11 15:34:05,028 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir in system properties and HBase conf 2023-07-11 15:34:05,028 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-11 15:34:05,028 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-11 15:34:05,028 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-11 15:34:05,028 DEBUG [Listener at localhost/45661] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-11 15:34:05,028 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-11 15:34:05,028 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-11 15:34:05,028 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-11 15:34:05,028 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/nfs.dump.dir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-11 15:34:05,029 INFO [Listener at localhost/45661] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-11 15:34:05,034 WARN [Listener at localhost/45661] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 15:34:05,034 WARN [Listener at localhost/45661] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 15:34:05,074 DEBUG [Listener at localhost/45661-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10154f6e260000a, quorum=127.0.0.1:49791, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-11 15:34:05,075 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10154f6e260000a, quorum=127.0.0.1:49791, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-11 15:34:05,085 WARN [Listener at localhost/45661] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:34:05,087 INFO [Listener at localhost/45661] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:34:05,094 INFO [Listener at localhost/45661] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir/Jetty_localhost_36487_hdfs____.7s01uu/webapp 2023-07-11 15:34:05,189 INFO [Listener at localhost/45661] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36487 2023-07-11 15:34:05,193 WARN [Listener at localhost/45661] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 15:34:05,193 WARN [Listener at localhost/45661] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 15:34:05,239 WARN [Listener at localhost/46437] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:34:05,256 WARN [Listener at localhost/46437] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:34:05,259 WARN [Listener 
at localhost/46437] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:34:05,260 INFO [Listener at localhost/46437] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:34:05,266 INFO [Listener at localhost/46437] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir/Jetty_localhost_45749_datanode____.2p4j3y/webapp 2023-07-11 15:34:05,370 INFO [Listener at localhost/46437] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45749 2023-07-11 15:34:05,380 WARN [Listener at localhost/44879] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:34:05,406 WARN [Listener at localhost/44879] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-11 15:34:05,478 WARN [Listener at localhost/44879] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:34:05,481 WARN [Listener at localhost/44879] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:34:05,483 INFO [Listener at localhost/44879] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:34:05,488 INFO [Listener at localhost/44879] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir/Jetty_localhost_39943_datanode____k4j9rt/webapp 2023-07-11 15:34:05,527 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x460e9e6024292df2: Processing first storage report for DS-34f566c8-e6c2-4554-b23c-7314f00a759b from datanode f73478c7-6fb7-456d-99d3-0399cb198ee4 2023-07-11 15:34:05,527 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x460e9e6024292df2: from storage DS-34f566c8-e6c2-4554-b23c-7314f00a759b node DatanodeRegistration(127.0.0.1:33947, datanodeUuid=f73478c7-6fb7-456d-99d3-0399cb198ee4, infoPort=35701, infoSecurePort=0, ipcPort=44879, storageInfo=lv=-57;cid=testClusterID;nsid=670843779;c=1689089645037), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:05,527 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x460e9e6024292df2: Processing first storage report for DS-49603ca6-eb1e-4f2f-8043-e79a0f1156bc from datanode f73478c7-6fb7-456d-99d3-0399cb198ee4 2023-07-11 15:34:05,527 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x460e9e6024292df2: from storage DS-49603ca6-eb1e-4f2f-8043-e79a0f1156bc node DatanodeRegistration(127.0.0.1:33947, datanodeUuid=f73478c7-6fb7-456d-99d3-0399cb198ee4, infoPort=35701, infoSecurePort=0, ipcPort=44879, storageInfo=lv=-57;cid=testClusterID;nsid=670843779;c=1689089645037), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-11 15:34:05,636 INFO [Listener at localhost/44879] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39943 2023-07-11 15:34:05,645 WARN [Listener at localhost/37111] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:34:05,680 WARN [Listener at localhost/37111] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:34:05,683 WARN [Listener at localhost/37111] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:34:05,685 INFO [Listener at localhost/37111] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:34:05,694 INFO [Listener at localhost/37111] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir/Jetty_localhost_37511_datanode____.rxf1ig/webapp 2023-07-11 15:34:05,777 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5d79caa369ad63ec: Processing first storage report for DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9 from datanode f8077ecc-8adb-407e-8e12-d868a9abb3a8 2023-07-11 15:34:05,777 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5d79caa369ad63ec: from storage DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9 node DatanodeRegistration(127.0.0.1:41079, datanodeUuid=f8077ecc-8adb-407e-8e12-d868a9abb3a8, infoPort=33899, infoSecurePort=0, ipcPort=37111, storageInfo=lv=-57;cid=testClusterID;nsid=670843779;c=1689089645037), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:05,777 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5d79caa369ad63ec: Processing first storage report for DS-92a5a01e-e0b8-43b5-9912-d76032608d47 from datanode f8077ecc-8adb-407e-8e12-d868a9abb3a8 2023-07-11 15:34:05,777 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5d79caa369ad63ec: from storage DS-92a5a01e-e0b8-43b5-9912-d76032608d47 node DatanodeRegistration(127.0.0.1:41079, datanodeUuid=f8077ecc-8adb-407e-8e12-d868a9abb3a8, infoPort=33899, infoSecurePort=0, ipcPort=37111, storageInfo=lv=-57;cid=testClusterID;nsid=670843779;c=1689089645037), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:05,808 INFO [Listener at localhost/37111] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37511 2023-07-11 15:34:05,819 WARN [Listener at localhost/36297] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:34:05,925 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf8018482e56f140e: Processing first storage report for DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423 from datanode 4a7b664a-1079-445e-b872-80b99f3b7f7f 2023-07-11 15:34:05,925 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf8018482e56f140e: from storage DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423 node DatanodeRegistration(127.0.0.1:36269, datanodeUuid=4a7b664a-1079-445e-b872-80b99f3b7f7f, infoPort=42247, infoSecurePort=0, ipcPort=36297, storageInfo=lv=-57;cid=testClusterID;nsid=670843779;c=1689089645037), blocks: 0, hasStaleStorage: true, processing 
time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:05,926 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf8018482e56f140e: Processing first storage report for DS-304babd7-56d3-4846-942d-3511c3fc0e36 from datanode 4a7b664a-1079-445e-b872-80b99f3b7f7f 2023-07-11 15:34:05,926 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf8018482e56f140e: from storage DS-304babd7-56d3-4846-942d-3511c3fc0e36 node DatanodeRegistration(127.0.0.1:36269, datanodeUuid=4a7b664a-1079-445e-b872-80b99f3b7f7f, infoPort=42247, infoSecurePort=0, ipcPort=36297, storageInfo=lv=-57;cid=testClusterID;nsid=670843779;c=1689089645037), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:05,936 DEBUG [Listener at localhost/36297] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81 2023-07-11 15:34:05,941 INFO [Listener at localhost/36297] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/zookeeper_0, clientPort=51551, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-11 15:34:05,943 INFO [Listener at localhost/36297] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51551 2023-07-11 15:34:05,943 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:05,944 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:05,966 INFO [Listener at localhost/36297] util.FSUtils(471): Created version file at hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026 with version=8 2023-07-11 15:34:05,967 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/hbase-staging 2023-07-11 15:34:05,968 DEBUG [Listener at localhost/36297] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-11 15:34:05,968 DEBUG [Listener at localhost/36297] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-11 15:34:05,968 DEBUG [Listener at localhost/36297] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-11 15:34:05,968 DEBUG [Listener at localhost/36297] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-11 15:34:05,969 INFO [Listener at localhost/36297] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:05,969 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:05,969 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:05,969 INFO [Listener at localhost/36297] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:05,969 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:05,969 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:05,969 INFO [Listener at localhost/36297] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:05,970 INFO [Listener at localhost/36297] ipc.NettyRpcServer(120): Bind to /172.31.2.10:38729 2023-07-11 15:34:05,971 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:05,972 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:05,973 INFO [Listener at localhost/36297] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38729 connecting to ZooKeeper ensemble=127.0.0.1:51551 2023-07-11 15:34:05,981 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:387290x0, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:05,981 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38729-0x10154f761120000 connected 2023-07-11 15:34:06,006 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:06,006 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:06,007 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:34:06,011 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38729 2023-07-11 15:34:06,012 DEBUG [Listener at localhost/36297] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38729 2023-07-11 15:34:06,013 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38729 2023-07-11 15:34:06,014 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38729 2023-07-11 15:34:06,014 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38729 2023-07-11 15:34:06,018 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:06,019 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:06,019 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:06,019 INFO [Listener at localhost/36297] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-11 15:34:06,020 INFO [Listener at localhost/36297] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:06,020 INFO [Listener at localhost/36297] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:06,020 INFO [Listener at localhost/36297] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 15:34:06,020 INFO [Listener at localhost/36297] http.HttpServer(1146): Jetty bound to port 34203 2023-07-11 15:34:06,021 INFO [Listener at localhost/36297] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:06,030 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,030 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@611aa7c8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:06,031 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,031 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e61a861{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:06,163 INFO [Listener at localhost/36297] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:06,165 INFO [Listener at localhost/36297] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:06,165 INFO [Listener at localhost/36297] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:06,165 INFO [Listener at localhost/36297] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 15:34:06,167 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,168 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@30b9e023{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir/jetty-0_0_0_0-34203-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5694268262004267788/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 15:34:06,170 INFO [Listener at localhost/36297] server.AbstractConnector(333): Started ServerConnector@627c5fb{HTTP/1.1, (http/1.1)}{0.0.0.0:34203} 2023-07-11 15:34:06,170 INFO [Listener at localhost/36297] server.Server(415): Started @37664ms 2023-07-11 15:34:06,170 INFO [Listener at localhost/36297] master.HMaster(444): hbase.rootdir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026, hbase.cluster.distributed=false 2023-07-11 15:34:06,187 INFO [Listener at localhost/36297] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:06,188 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,188 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,188 INFO 
[Listener at localhost/36297] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:06,188 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,188 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:06,188 INFO [Listener at localhost/36297] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:06,189 INFO [Listener at localhost/36297] ipc.NettyRpcServer(120): Bind to /172.31.2.10:40917 2023-07-11 15:34:06,189 INFO [Listener at localhost/36297] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:34:06,195 DEBUG [Listener at localhost/36297] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:34:06,195 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:06,197 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:06,199 INFO [Listener at localhost/36297] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40917 connecting to ZooKeeper ensemble=127.0.0.1:51551 2023-07-11 15:34:06,203 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:409170x0, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:06,204 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:409170x0, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:06,205 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40917-0x10154f761120001 connected 2023-07-11 15:34:06,206 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:06,208 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:34:06,215 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40917 2023-07-11 15:34:06,215 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40917 2023-07-11 15:34:06,217 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40917 2023-07-11 15:34:06,222 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40917 2023-07-11 15:34:06,222 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40917 2023-07-11 15:34:06,224 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:06,224 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:06,225 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:06,225 INFO [Listener at localhost/36297] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:34:06,225 INFO [Listener at localhost/36297] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:06,225 INFO [Listener at localhost/36297] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:06,226 INFO [Listener at localhost/36297] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 15:34:06,227 INFO [Listener at localhost/36297] http.HttpServer(1146): Jetty bound to port 40097 2023-07-11 15:34:06,227 INFO [Listener at localhost/36297] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:06,238 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,238 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39385d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:06,239 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,239 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7acb83b4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:06,364 INFO [Listener at localhost/36297] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:06,365 INFO [Listener at localhost/36297] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:06,366 INFO [Listener at localhost/36297] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:06,366 INFO [Listener at localhost/36297] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:34:06,367 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,367 INFO 
[Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1c33904b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir/jetty-0_0_0_0-40097-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5213971042088684849/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:06,369 INFO [Listener at localhost/36297] server.AbstractConnector(333): Started ServerConnector@5c60221f{HTTP/1.1, (http/1.1)}{0.0.0.0:40097} 2023-07-11 15:34:06,369 INFO [Listener at localhost/36297] server.Server(415): Started @37862ms 2023-07-11 15:34:06,381 INFO [Listener at localhost/36297] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:06,381 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,381 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,381 INFO [Listener at localhost/36297] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:06,381 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,381 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:06,382 INFO [Listener at localhost/36297] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:06,382 INFO [Listener at localhost/36297] ipc.NettyRpcServer(120): Bind to /172.31.2.10:43971 2023-07-11 15:34:06,383 INFO [Listener at localhost/36297] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:34:06,385 DEBUG [Listener at localhost/36297] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:34:06,385 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:06,386 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:06,387 INFO [Listener at localhost/36297] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43971 connecting to ZooKeeper ensemble=127.0.0.1:51551 2023-07-11 15:34:06,391 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:439710x0, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 
15:34:06,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43971-0x10154f761120002 connected 2023-07-11 15:34:06,393 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:06,393 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:06,394 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:34:06,394 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43971 2023-07-11 15:34:06,394 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43971 2023-07-11 15:34:06,394 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43971 2023-07-11 15:34:06,397 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43971 2023-07-11 15:34:06,399 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43971 2023-07-11 15:34:06,401 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:06,402 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:06,402 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:06,402 INFO [Listener at localhost/36297] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:34:06,403 INFO [Listener at localhost/36297] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:06,403 INFO [Listener at localhost/36297] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:06,403 INFO [Listener at localhost/36297] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 15:34:06,404 INFO [Listener at localhost/36297] http.HttpServer(1146): Jetty bound to port 44087 2023-07-11 15:34:06,404 INFO [Listener at localhost/36297] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:06,405 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,405 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6c0f5bac{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:06,406 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,406 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@20863e22{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:06,525 INFO [Listener at localhost/36297] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:06,527 INFO [Listener at localhost/36297] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:06,527 INFO [Listener at localhost/36297] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:06,527 INFO [Listener at localhost/36297] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 15:34:06,528 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,529 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6a24c095{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir/jetty-0_0_0_0-44087-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8795277621643533077/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:06,533 INFO [Listener at localhost/36297] server.AbstractConnector(333): Started ServerConnector@13d8a6b4{HTTP/1.1, (http/1.1)}{0.0.0.0:44087} 2023-07-11 15:34:06,533 INFO [Listener at localhost/36297] server.Server(415): Started @38027ms 2023-07-11 15:34:06,546 INFO [Listener at localhost/36297] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:06,546 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,546 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,546 INFO [Listener at localhost/36297] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:06,546 INFO 
[Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:06,546 INFO [Listener at localhost/36297] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:06,546 INFO [Listener at localhost/36297] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:06,547 INFO [Listener at localhost/36297] ipc.NettyRpcServer(120): Bind to /172.31.2.10:42857 2023-07-11 15:34:06,547 INFO [Listener at localhost/36297] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:34:06,549 DEBUG [Listener at localhost/36297] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:34:06,549 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:06,550 INFO [Listener at localhost/36297] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:06,551 INFO [Listener at localhost/36297] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42857 connecting to ZooKeeper ensemble=127.0.0.1:51551 2023-07-11 15:34:06,555 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:428570x0, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:06,557 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:428570x0, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:06,558 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42857-0x10154f761120003 connected 2023-07-11 15:34:06,558 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:06,559 DEBUG [Listener at localhost/36297] zookeeper.ZKUtil(164): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:34:06,559 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42857 2023-07-11 15:34:06,559 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42857 2023-07-11 15:34:06,560 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42857 2023-07-11 15:34:06,560 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42857 2023-07-11 15:34:06,560 DEBUG [Listener at localhost/36297] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=42857 2023-07-11 15:34:06,562 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:06,562 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:06,562 INFO [Listener at localhost/36297] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:06,563 INFO [Listener at localhost/36297] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:34:06,563 INFO [Listener at localhost/36297] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:06,563 INFO [Listener at localhost/36297] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:06,563 INFO [Listener at localhost/36297] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 15:34:06,564 INFO [Listener at localhost/36297] http.HttpServer(1146): Jetty bound to port 34163 2023-07-11 15:34:06,564 INFO [Listener at localhost/36297] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:06,565 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,565 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@445d8bcc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:06,565 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,566 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10a1cb35{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:06,686 INFO [Listener at localhost/36297] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:06,687 INFO [Listener at localhost/36297] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:06,688 INFO [Listener at localhost/36297] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:06,688 INFO [Listener at localhost/36297] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:34:06,688 INFO [Listener at localhost/36297] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:06,689 INFO [Listener at localhost/36297] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@121437bd{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/java.io.tmpdir/jetty-0_0_0_0-34163-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5216734429647555224/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:06,691 INFO [Listener at localhost/36297] server.AbstractConnector(333): Started ServerConnector@36aabc41{HTTP/1.1, (http/1.1)}{0.0.0.0:34163} 2023-07-11 15:34:06,691 INFO [Listener at localhost/36297] server.Server(415): Started @38184ms 2023-07-11 15:34:06,693 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:06,697 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@e7d1f4c{HTTP/1.1, (http/1.1)}{0.0.0.0:36609} 2023-07-11 15:34:06,697 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @38191ms 2023-07-11 15:34:06,698 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:06,699 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 15:34:06,699 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:06,701 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:06,701 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:06,701 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:06,701 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:06,702 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 15:34:06,702 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:06,704 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,38729,1689089645968 from backup master directory 2023-07-11 15:34:06,705 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 15:34:06,706 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:06,706 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 15:34:06,706 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 15:34:06,706 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:06,726 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/hbase.id with ID: e56ca4f2-d677-4475-80c9-3b30e49dc707 2023-07-11 15:34:06,739 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:06,744 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:06,768 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5661ea84 to 127.0.0.1:51551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:06,772 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44567b98, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:06,772 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:06,773 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-11 15:34:06,773 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:06,775 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store-tmp 2023-07-11 15:34:06,786 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:06,786 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 15:34:06,786 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:06,786 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:06,786 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 15:34:06,787 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:06,787 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
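For reference, the column-family attributes printed in the 'master:store' descriptor above map directly onto the HBase 2.x builder API. A minimal sketch, assuming nothing beyond the public client classes in hbase 2.4; the class and main method are illustrative and not part of this test run:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Column family 'proc' with the attributes shown in the descriptor above.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                 // VERSIONS => '1'
        .setInMemory(false)                // IN_MEMORY => 'false'
        .setBlocksize(65536)               // BLOCKSIZE => '65536'
        .build();
    // Descriptor for the master-local table 'master:store'; built here only to show
    // the notation, the region itself is created internally by the master.
    TableDescriptor store = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();
    System.out.println(store);
  }
}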
2023-07-11 15:34:06,787 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:34:06,787 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/WALs/jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:06,790 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C38729%2C1689089645968, suffix=, logDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/WALs/jenkins-hbase9.apache.org,38729,1689089645968, archiveDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/oldWALs, maxLogs=10 2023-07-11 15:34:06,806 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK] 2023-07-11 15:34:06,806 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK] 2023-07-11 15:34:06,806 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK] 2023-07-11 15:34:06,809 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/WALs/jenkins-hbase9.apache.org,38729,1689089645968/jenkins-hbase9.apache.org%2C38729%2C1689089645968.1689089646790 2023-07-11 15:34:06,809 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK], DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK], DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK]] 2023-07-11 15:34:06,809 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:06,809 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:06,809 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:06,809 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:06,811 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:06,813 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-11 15:34:06,813 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-11 15:34:06,814 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:06,815 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:06,815 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:06,818 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:06,820 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:06,820 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10499543520, jitterRate=-0.022153809666633606}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:06,820 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:34:06,821 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-11 15:34:06,822 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-11 15:34:06,822 INFO 
[master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-11 15:34:06,822 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-11 15:34:06,822 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-11 15:34:06,823 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-11 15:34:06,823 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-11 15:34:06,824 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-11 15:34:06,825 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-11 15:34:06,826 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-11 15:34:06,826 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-11 15:34:06,826 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-11 15:34:06,828 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:06,828 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-11 15:34:06,829 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-11 15:34:06,830 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-11 15:34:06,831 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:06,831 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:06,831 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,38729,1689089645968, sessionid=0x10154f761120000, setting cluster-up flag (Was=false) 2023-07-11 
15:34:06,833 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:06,834 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:06,834 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:06,842 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-11 15:34:06,842 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:06,845 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:06,850 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-11 15:34:06,851 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:06,851 WARN [master/jenkins-hbase9:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.hbase-snapshot/.tmp 2023-07-11 15:34:06,853 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-11 15:34:06,853 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-11 15:34:06,854 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-11 15:34:06,855 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 15:34:06,855 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-11 15:34:06,855 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
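The RSGroupAdminEndpoint, CPMasterObserver and MasterQuotasObserver coprocessors reported above are loaded through the master coprocessor configuration. A minimal sketch of the relevant keys; the RSGroupBasedLoadBalancer pairing is the usual convention for enabling rsgroups and is an assumption, not something this log states explicitly:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

public class MasterCoprocessorConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY == "hbase.coprocessor.master.classes";
    // classes listed here are loaded in order, which yields the ascending priorities above.
    conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // The endpoint is conventionally paired with the rsgroup-aware balancer.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    System.out.println(conf.get(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY));
  }
}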
2023-07-11 15:34:06,856 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-11 15:34:06,869 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 15:34:06,869 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 15:34:06,870 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 15:34:06,870 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 15:34:06,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:34:06,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:34:06,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:34:06,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:34:06,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-11 15:34:06,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:06,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:06,870 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:06,884 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689089676884 2023-07-11 15:34:06,884 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-11 15:34:06,885 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-11 15:34:06,885 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-11 15:34:06,885 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-11 15:34:06,885 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-11 15:34:06,885 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-11 15:34:06,893 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:06,899 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-11 15:34:06,899 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-11 15:34:06,899 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-11 15:34:06,911 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-11 15:34:06,912 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-11 15:34:06,921 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089646912,5,FailOnTimeoutGroup] 2023-07-11 15:34:06,924 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(951): ClusterId : e56ca4f2-d677-4475-80c9-3b30e49dc707 2023-07-11 15:34:06,923 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 15:34:06,932 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-11 15:34:06,934 DEBUG [RS:0;jenkins-hbase9:40917] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:34:06,935 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 
'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:06,937 DEBUG [RS:0;jenkins-hbase9:40917] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:34:06,937 DEBUG [RS:0;jenkins-hbase9:40917] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:34:06,924 INFO [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(951): ClusterId : e56ca4f2-d677-4475-80c9-3b30e49dc707 2023-07-11 15:34:06,928 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089646926,5,FailOnTimeoutGroup] 2023-07-11 15:34:06,927 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(951): ClusterId : e56ca4f2-d677-4475-80c9-3b30e49dc707 2023-07-11 15:34:06,940 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:06,940 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-11 15:34:06,940 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:06,940 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
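The LogsCleaner/HFileCleaner chores and their TimeToLive-based plugins logged above are governed by a handful of configuration keys. A sketch with the stock defaults; the TTL keys are standard, while the pool-size key is an assumption inferred from the DirScanPool lines:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChoreConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // TimeToLiveLogCleaner: how long archived WALs survive in oldWALs (default 10 min).
    conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
    // TimeToLiveHFileCleaner: how long archived HFiles survive (default 5 min).
    conf.setLong("hbase.master.hfilecleaner.ttl", 300_000L);
    // Assumed key behind the "Cleaner pool size" lines emitted by DirScanPool.
    conf.set("hbase.cleaner.scan.dir.concurrent.size", "2");
    System.out.println(conf.getLong("hbase.master.logcleaner.ttl", -1L));
  }
}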
2023-07-11 15:34:06,939 DEBUG [RS:1;jenkins-hbase9:43971] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:34:06,942 DEBUG [RS:2;jenkins-hbase9:42857] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:34:06,949 DEBUG [RS:0;jenkins-hbase9:40917] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:34:06,950 DEBUG [RS:1;jenkins-hbase9:43971] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:34:06,950 DEBUG [RS:1;jenkins-hbase9:43971] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:34:06,950 DEBUG [RS:2;jenkins-hbase9:42857] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:34:06,950 DEBUG [RS:2;jenkins-hbase9:42857] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:34:06,952 DEBUG [RS:1;jenkins-hbase9:43971] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:34:06,954 DEBUG [RS:2;jenkins-hbase9:42857] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:34:06,959 DEBUG [RS:0;jenkins-hbase9:40917] zookeeper.ReadOnlyZKClient(139): Connect 0x4e73f1e8 to 127.0.0.1:51551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:06,959 DEBUG [RS:1;jenkins-hbase9:43971] zookeeper.ReadOnlyZKClient(139): Connect 0x6e62dd7f to 127.0.0.1:51551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:06,960 DEBUG [RS:2;jenkins-hbase9:42857] zookeeper.ReadOnlyZKClient(139): Connect 0x76a5d00e to 127.0.0.1:51551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:06,987 DEBUG [RS:0;jenkins-hbase9:40917] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@40b40656, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:06,987 DEBUG [RS:2;jenkins-hbase9:42857] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@786230aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:06,987 DEBUG [RS:1;jenkins-hbase9:43971] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@649ab60f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:06,987 DEBUG [RS:2;jenkins-hbase9:42857] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c5dbb7e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:06,988 DEBUG [RS:0;jenkins-hbase9:40917] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1890c6d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:06,988 DEBUG [RS:1;jenkins-hbase9:43971] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@219834f8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:06,995 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:06,996 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:06,996 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026 2023-07-11 15:34:07,003 DEBUG [RS:1;jenkins-hbase9:43971] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:43971 2023-07-11 15:34:07,003 INFO [RS:1;jenkins-hbase9:43971] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:34:07,003 INFO [RS:1;jenkins-hbase9:43971] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:34:07,003 DEBUG [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 15:34:07,003 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:40917 2023-07-11 15:34:07,004 INFO [RS:0;jenkins-hbase9:40917] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:34:07,004 INFO [RS:0;jenkins-hbase9:40917] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:34:07,004 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1022): About to register with Master. 
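The ReadOnlyZKClient and AbstractRpcClient lines above show the client-side connection parameters in play (ensemble 127.0.0.1:51551, 90000 ms session timeout, KeyValueCodec). A hedged sketch of an equivalent client connection; the zookeeper keys are standard, and the codec key is assumed to be the usual hbase.client.rpc.codec:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClientConnectionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Ensemble and session timeout as reported by ReadOnlyZKClient above.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 51551);
    conf.setInt("zookeeper.session.timeout", 90000);
    // Same cell codec the RPC client logs (key name is an assumption).
    conf.set("hbase.client.rpc.codec", "org.apache.hadoop.hbase.codec.KeyValueCodec");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Once the master is up, this returns the cluster ID written to hbase.id above.
      System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
    }
  }
}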
2023-07-11 15:34:07,004 INFO [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38729,1689089645968 with isa=jenkins-hbase9.apache.org/172.31.2.10:43971, startcode=1689089646380 2023-07-11 15:34:07,004 DEBUG [RS:1;jenkins-hbase9:43971] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:34:07,004 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:42857 2023-07-11 15:34:07,006 INFO [RS:2;jenkins-hbase9:42857] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:34:07,006 INFO [RS:2;jenkins-hbase9:42857] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:34:07,006 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 15:34:07,007 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38729,1689089645968 with isa=jenkins-hbase9.apache.org/172.31.2.10:40917, startcode=1689089646187 2023-07-11 15:34:07,007 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,38729,1689089645968 with isa=jenkins-hbase9.apache.org/172.31.2.10:42857, startcode=1689089646545 2023-07-11 15:34:07,008 DEBUG [RS:0;jenkins-hbase9:40917] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:34:07,008 DEBUG [RS:2;jenkins-hbase9:42857] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:34:07,014 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:33585, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:34:07,014 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:40469, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:34:07,016 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47119, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:34:07,019 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38729] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:07,019 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 15:34:07,020 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-11 15:34:07,020 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38729] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,020 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 15:34:07,020 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-11 15:34:07,020 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38729] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,020 DEBUG [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026 2023-07-11 15:34:07,020 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026 2023-07-11 15:34:07,020 DEBUG [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46437 2023-07-11 15:34:07,021 DEBUG [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34203 2023-07-11 15:34:07,020 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
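As the three region servers report for duty, the RSGroupInfoManager keeps the 'default' group's server list in sync ("Updated with servers: 1/2/3"). A sketch of reading that membership back, assuming the RSGroupAdminClient API from the hbase-rsgroup module that this test exercises:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DefaultGroupServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The ServerEventsListenerThread lines above track membership of this group.
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      for (Address server : defaultGroup.getServers()) {
        System.out.println("default group member: " + server);
      }
    }
  }
}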
2023-07-11 15:34:07,021 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026 2023-07-11 15:34:07,021 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46437 2023-07-11 15:34:07,021 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34203 2023-07-11 15:34:07,020 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46437 2023-07-11 15:34:07,021 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-11 15:34:07,021 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34203 2023-07-11 15:34:07,022 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:07,032 DEBUG [RS:1;jenkins-hbase9:43971] zookeeper.ZKUtil(162): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:07,032 WARN [RS:1;jenkins-hbase9:43971] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 15:34:07,032 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,40917,1689089646187] 2023-07-11 15:34:07,032 INFO [RS:1;jenkins-hbase9:43971] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:07,032 DEBUG [RS:2;jenkins-hbase9:42857] zookeeper.ZKUtil(162): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,032 DEBUG [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:07,032 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,42857,1689089646545] 2023-07-11 15:34:07,032 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,43971,1689089646380] 2023-07-11 15:34:07,032 WARN [RS:2;jenkins-hbase9:42857] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
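Each region server registers itself as an ephemeral child of /hbase/rs, which is what the RegionServerTracker lines above react to. A sketch of observing the same znode with a plain ZooKeeper client; the ensemble address is taken from the log, everything else is generic ZooKeeper API:

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class RsZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Quorum address and base znode taken from the ZKWatcher lines above.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:51551", 90000,
        event -> System.out.println("zk event: " + event));
    // Every live region server holds an ephemeral child under /hbase/rs; the boolean
    // 'true' leaves a watch that fires NodeChildrenChanged when servers join or die.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    servers.forEach(s -> System.out.println("registered: " + s));
    zk.close();
  }
}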
2023-07-11 15:34:07,032 DEBUG [RS:0;jenkins-hbase9:40917] zookeeper.ZKUtil(162): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,032 INFO [RS:2;jenkins-hbase9:42857] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:07,032 WARN [RS:0;jenkins-hbase9:40917] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 15:34:07,033 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,033 INFO [RS:0;jenkins-hbase9:40917] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:07,033 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,042 DEBUG [RS:1;jenkins-hbase9:43971] zookeeper.ZKUtil(162): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:07,042 DEBUG [RS:2;jenkins-hbase9:42857] zookeeper.ZKUtil(162): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:07,042 DEBUG [RS:1;jenkins-hbase9:43971] zookeeper.ZKUtil(162): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,043 DEBUG [RS:1;jenkins-hbase9:43971] zookeeper.ZKUtil(162): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,043 DEBUG [RS:0;jenkins-hbase9:40917] zookeeper.ZKUtil(162): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:07,043 DEBUG [RS:2;jenkins-hbase9:42857] zookeeper.ZKUtil(162): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,044 DEBUG [RS:2;jenkins-hbase9:42857] zookeeper.ZKUtil(162): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,044 DEBUG [RS:0;jenkins-hbase9:40917] zookeeper.ZKUtil(162): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,045 DEBUG [RS:0;jenkins-hbase9:40917] zookeeper.ZKUtil(162): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,045 DEBUG [RS:1;jenkins-hbase9:43971] regionserver.Replication(139): Replication 
stats-in-log period=300 seconds 2023-07-11 15:34:07,045 INFO [RS:1;jenkins-hbase9:43971] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:34:07,045 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:07,046 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:34:07,046 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:34:07,047 INFO [RS:0;jenkins-hbase9:40917] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:34:07,047 INFO [RS:2;jenkins-hbase9:42857] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:34:07,050 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 15:34:07,054 INFO [RS:0;jenkins-hbase9:40917] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:34:07,055 INFO [RS:2;jenkins-hbase9:42857] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:34:07,055 INFO [RS:1;jenkins-hbase9:43971] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:34:07,055 INFO [RS:2;jenkins-hbase9:42857] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:34:07,055 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:07,056 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/info 2023-07-11 15:34:07,056 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 15:34:07,057 INFO [RS:0;jenkins-hbase9:40917] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:34:07,057 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,057 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:07,058 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 15:34:07,059 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:34:07,060 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 15:34:07,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:07,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 
2023-07-11 15:34:07,062 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:34:07,062 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/table 2023-07-11 15:34:07,062 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 15:34:07,063 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:07,069 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:34:07,070 INFO [RS:1;jenkins-hbase9:43971] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:34:07,070 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,081 INFO [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:34:07,083 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:07,083 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [RS:0;jenkins-hbase9:40917] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,084 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740 2023-07-11 15:34:07,085 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740 2023-07-11 15:34:07,087 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-11 15:34:07,088 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 15:34:07,089 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,090 DEBUG [RS:1;jenkins-hbase9:43971] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,093 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,098 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,100 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,106 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,106 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,113 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,114 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:07,114 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:07,114 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,114 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,115 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10764317120, jitterRate=0.0025051534175872803}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 15:34:07,115 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 15:34:07,115 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 15:34:07,115 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 15:34:07,115 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 15:34:07,115 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 15:34:07,115 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,126 DEBUG [RS:2;jenkins-hbase9:42857] 
executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:07,139 INFO [RS:0;jenkins-hbase9:40917] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:34:07,140 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,40917,1689089646187-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,145 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,145 INFO [RS:1;jenkins-hbase9:43971] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:34:07,145 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,145 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,43971,1689089646380-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,145 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,146 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,158 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 15:34:07,158 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 15:34:07,159 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 15:34:07,159 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-11 15:34:07,159 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-11 15:34:07,164 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-11 15:34:07,165 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-11 15:34:07,175 INFO [RS:2;jenkins-hbase9:42857] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:34:07,175 INFO [RS:1;jenkins-hbase9:43971] regionserver.Replication(203): jenkins-hbase9.apache.org,43971,1689089646380 started 2023-07-11 15:34:07,175 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,42857,1689089646545-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:07,175 INFO [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,43971,1689089646380, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:43971, sessionid=0x10154f761120002 2023-07-11 15:34:07,175 DEBUG [RS:1;jenkins-hbase9:43971] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:34:07,175 DEBUG [RS:1;jenkins-hbase9:43971] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:07,175 DEBUG [RS:1;jenkins-hbase9:43971] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,43971,1689089646380' 2023-07-11 15:34:07,175 DEBUG [RS:1;jenkins-hbase9:43971] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:34:07,176 DEBUG [RS:1;jenkins-hbase9:43971] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:34:07,176 DEBUG [RS:1;jenkins-hbase9:43971] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:34:07,176 DEBUG [RS:1;jenkins-hbase9:43971] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:34:07,176 DEBUG [RS:1;jenkins-hbase9:43971] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:07,176 DEBUG [RS:1;jenkins-hbase9:43971] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,43971,1689089646380' 2023-07-11 15:34:07,176 DEBUG [RS:1;jenkins-hbase9:43971] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:34:07,176 DEBUG [RS:1;jenkins-hbase9:43971] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:34:07,177 DEBUG [RS:1;jenkins-hbase9:43971] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:34:07,177 INFO [RS:1;jenkins-hbase9:43971] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-11 15:34:07,179 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,180 DEBUG [RS:1;jenkins-hbase9:43971] zookeeper.ZKUtil(398): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-11 15:34:07,180 INFO [RS:1;jenkins-hbase9:43971] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-11 15:34:07,181 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,181 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:07,185 INFO [RS:0;jenkins-hbase9:40917] regionserver.Replication(203): jenkins-hbase9.apache.org,40917,1689089646187 started 2023-07-11 15:34:07,186 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,40917,1689089646187, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:40917, sessionid=0x10154f761120001 2023-07-11 15:34:07,188 DEBUG [RS:0;jenkins-hbase9:40917] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:34:07,188 DEBUG [RS:0;jenkins-hbase9:40917] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,188 DEBUG [RS:0;jenkins-hbase9:40917] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,40917,1689089646187' 2023-07-11 15:34:07,188 DEBUG [RS:0;jenkins-hbase9:40917] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:34:07,189 DEBUG [RS:0;jenkins-hbase9:40917] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:34:07,189 DEBUG [RS:0;jenkins-hbase9:40917] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:34:07,189 DEBUG [RS:0;jenkins-hbase9:40917] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:34:07,189 DEBUG [RS:0;jenkins-hbase9:40917] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,189 DEBUG [RS:0;jenkins-hbase9:40917] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,40917,1689089646187' 2023-07-11 15:34:07,189 DEBUG [RS:0;jenkins-hbase9:40917] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:34:07,190 DEBUG [RS:0;jenkins-hbase9:40917] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:34:07,190 DEBUG [RS:0;jenkins-hbase9:40917] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:34:07,190 INFO [RS:0;jenkins-hbase9:40917] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-11 15:34:07,190 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,191 DEBUG [RS:0;jenkins-hbase9:40917] zookeeper.ZKUtil(398): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-11 15:34:07,191 INFO [RS:0;jenkins-hbase9:40917] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-11 15:34:07,191 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,191 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:07,193 INFO [RS:2;jenkins-hbase9:42857] regionserver.Replication(203): jenkins-hbase9.apache.org,42857,1689089646545 started 2023-07-11 15:34:07,194 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,42857,1689089646545, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:42857, sessionid=0x10154f761120003 2023-07-11 15:34:07,194 DEBUG [RS:2;jenkins-hbase9:42857] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:34:07,194 DEBUG [RS:2;jenkins-hbase9:42857] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,194 DEBUG [RS:2;jenkins-hbase9:42857] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,42857,1689089646545' 2023-07-11 15:34:07,194 DEBUG [RS:2;jenkins-hbase9:42857] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:34:07,194 DEBUG [RS:2;jenkins-hbase9:42857] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:34:07,195 DEBUG [RS:2;jenkins-hbase9:42857] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:34:07,195 DEBUG [RS:2;jenkins-hbase9:42857] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:34:07,195 DEBUG [RS:2;jenkins-hbase9:42857] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,195 DEBUG [RS:2;jenkins-hbase9:42857] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,42857,1689089646545' 2023-07-11 15:34:07,195 DEBUG [RS:2;jenkins-hbase9:42857] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:34:07,195 DEBUG [RS:2;jenkins-hbase9:42857] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:34:07,195 DEBUG [RS:2;jenkins-hbase9:42857] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:34:07,196 INFO [RS:2;jenkins-hbase9:42857] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-11 15:34:07,196 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,196 DEBUG [RS:2;jenkins-hbase9:42857] zookeeper.ZKUtil(398): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-11 15:34:07,196 INFO [RS:2;jenkins-hbase9:42857] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-11 15:34:07,196 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,196 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:07,285 INFO [RS:1;jenkins-hbase9:43971] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C43971%2C1689089646380, suffix=, logDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,43971,1689089646380, archiveDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/oldWALs, maxLogs=32 2023-07-11 15:34:07,355 DEBUG [jenkins-hbase9:38729] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-11 15:34:07,355 DEBUG [jenkins-hbase9:38729] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:07,356 DEBUG [jenkins-hbase9:38729] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:07,356 DEBUG [jenkins-hbase9:38729] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:07,356 DEBUG [jenkins-hbase9:38729] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:07,357 DEBUG [jenkins-hbase9:38729] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:07,358 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,40917,1689089646187, state=OPENING 2023-07-11 15:34:07,359 INFO [RS:0;jenkins-hbase9:40917] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C40917%2C1689089646187, suffix=, logDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,40917,1689089646187, archiveDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/oldWALs, maxLogs=32 2023-07-11 15:34:07,360 DEBUG [PEWorker-5] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-11 15:34:07,360 INFO [RS:2;jenkins-hbase9:42857] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C42857%2C1689089646545, suffix=, logDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,42857,1689089646545, archiveDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/oldWALs, maxLogs=32 2023-07-11 15:34:07,372 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:07,373 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:34:07,375 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,40917,1689089646187}] 2023-07-11 15:34:07,382 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK] 2023-07-11 15:34:07,382 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK] 2023-07-11 15:34:07,382 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK] 2023-07-11 15:34:07,394 INFO [RS:1;jenkins-hbase9:43971] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,43971,1689089646380/jenkins-hbase9.apache.org%2C43971%2C1689089646380.1689089647354 2023-07-11 15:34:07,395 DEBUG [RS:1;jenkins-hbase9:43971] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK], DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK], DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK]] 2023-07-11 15:34:07,402 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK] 2023-07-11 15:34:07,402 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK] 2023-07-11 15:34:07,402 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK] 2023-07-11 15:34:07,403 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK] 2023-07-11 15:34:07,403 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK] 2023-07-11 15:34:07,403 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK] 2023-07-11 15:34:07,405 INFO [RS:0;jenkins-hbase9:40917] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,40917,1689089646187/jenkins-hbase9.apache.org%2C40917%2C1689089646187.1689089647360 2023-07-11 15:34:07,406 DEBUG [RS:0;jenkins-hbase9:40917] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK], DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK], 
DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK]] 2023-07-11 15:34:07,409 INFO [RS:2;jenkins-hbase9:42857] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,42857,1689089646545/jenkins-hbase9.apache.org%2C42857%2C1689089646545.1689089647361 2023-07-11 15:34:07,410 DEBUG [RS:2;jenkins-hbase9:42857] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK], DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK], DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK]] 2023-07-11 15:34:07,471 WARN [ReadOnlyZKClient-127.0.0.1:51551@0x5661ea84] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-11 15:34:07,471 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38729,1689089645968] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:34:07,473 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:39756, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:34:07,474 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40917] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:39756 deadline: 1689089707474, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,534 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:07,536 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:34:07,538 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:39758, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:34:07,542 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 15:34:07,542 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:07,544 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C40917%2C1689089646187.meta, suffix=.meta, logDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,40917,1689089646187, archiveDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/oldWALs, maxLogs=32 2023-07-11 15:34:07,564 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK] 2023-07-11 15:34:07,566 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK] 2023-07-11 15:34:07,569 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK] 2023-07-11 15:34:07,571 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,40917,1689089646187/jenkins-hbase9.apache.org%2C40917%2C1689089646187.meta.1689089647545.meta 2023-07-11 15:34:07,571 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41079,DS-fbe21cda-db4c-43ad-9859-2acc5e1685c9,DISK], DatanodeInfoWithStorage[127.0.0.1:33947,DS-34f566c8-e6c2-4554-b23c-7314f00a759b,DISK], DatanodeInfoWithStorage[127.0.0.1:36269,DS-bd00025d-dfec-48bb-b9a4-dbf9a2692423,DISK]] 2023-07-11 15:34:07,571 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:07,571 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 15:34:07,572 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 15:34:07,572 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-11 15:34:07,572 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 15:34:07,572 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:07,572 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 15:34:07,572 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 15:34:07,573 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 15:34:07,574 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/info 2023-07-11 15:34:07,574 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/info 2023-07-11 15:34:07,575 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 15:34:07,576 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:07,576 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 15:34:07,577 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:34:07,577 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:34:07,577 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 15:34:07,578 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:07,578 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 15:34:07,579 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/table 2023-07-11 15:34:07,579 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/table 2023-07-11 15:34:07,579 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 15:34:07,580 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:07,581 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740 2023-07-11 15:34:07,582 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740 2023-07-11 15:34:07,584 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-11 15:34:07,586 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 15:34:07,587 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11625789440, jitterRate=0.08273601531982422}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 15:34:07,587 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 15:34:07,587 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689089647534 2023-07-11 15:34:07,592 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 15:34:07,593 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 15:34:07,594 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,40917,1689089646187, state=OPEN 2023-07-11 15:34:07,595 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 15:34:07,595 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:34:07,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-11 15:34:07,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,40917,1689089646187 in 220 msec 2023-07-11 15:34:07,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-11 15:34:07,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 438 msec 2023-07-11 15:34:07,601 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 745 msec 2023-07-11 15:34:07,602 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689089647601, completionTime=-1 2023-07-11 15:34:07,602 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-11 15:34:07,602 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-11 15:34:07,607 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-11 15:34:07,607 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689089707607 2023-07-11 15:34:07,607 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689089767607 2023-07-11 15:34:07,607 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-11 15:34:07,616 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38729,1689089645968-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,616 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38729,1689089645968-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,616 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38729,1689089645968-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,616 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:38729, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,616 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:07,616 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-11 15:34:07,616 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:07,618 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-11 15:34:07,619 DEBUG [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-11 15:34:07,620 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:07,621 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:34:07,623 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:07,624 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7 empty. 2023-07-11 15:34:07,624 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:07,624 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-11 15:34:07,658 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:07,669 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4cee1b24173bbc90f5d9f4ff9b9e03e7, NAME => 'hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp 2023-07-11 15:34:07,705 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:07,705 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4cee1b24173bbc90f5d9f4ff9b9e03e7, disabling compactions & flushes 2023-07-11 15:34:07,705 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 
2023-07-11 15:34:07,705 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:07,705 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. after waiting 0 ms 2023-07-11 15:34:07,705 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:07,705 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:07,705 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4cee1b24173bbc90f5d9f4ff9b9e03e7: 2023-07-11 15:34:07,708 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:34:07,709 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089647709"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089647709"}]},"ts":"1689089647709"} 2023-07-11 15:34:07,712 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:34:07,712 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:34:07,713 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089647713"}]},"ts":"1689089647713"} 2023-07-11 15:34:07,714 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-11 15:34:07,718 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:07,718 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:07,718 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:07,718 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:07,718 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:07,718 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4cee1b24173bbc90f5d9f4ff9b9e03e7, ASSIGN}] 2023-07-11 15:34:07,720 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4cee1b24173bbc90f5d9f4ff9b9e03e7, ASSIGN 2023-07-11 15:34:07,721 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4cee1b24173bbc90f5d9f4ff9b9e03e7, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42857,1689089646545; forceNewPlan=false, retain=false 2023-07-11 15:34:07,871 INFO [jenkins-hbase9:38729] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-11 15:34:07,873 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4cee1b24173bbc90f5d9f4ff9b9e03e7, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:07,873 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089647873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089647873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089647873"}]},"ts":"1689089647873"} 2023-07-11 15:34:07,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 4cee1b24173bbc90f5d9f4ff9b9e03e7, server=jenkins-hbase9.apache.org,42857,1689089646545}] 2023-07-11 15:34:07,977 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38729,1689089645968] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:07,979 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38729,1689089645968] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-11 15:34:07,987 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:07,988 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:34:07,990 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:07,991 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1 empty. 
2023-07-11 15:34:07,991 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:07,991 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-11 15:34:08,007 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:08,009 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0b221651b0e21b16466af1eff01843d1, NAME => 'hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp 2023-07-11 15:34:08,028 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:08,028 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:34:08,030 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:35536, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:34:08,034 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:08,034 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 0b221651b0e21b16466af1eff01843d1, disabling compactions & flushes 2023-07-11 15:34:08,034 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:08,034 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:08,034 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. after waiting 0 ms 2023-07-11 15:34:08,034 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:08,034 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 
2023-07-11 15:34:08,034 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 0b221651b0e21b16466af1eff01843d1: 2023-07-11 15:34:08,037 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:34:08,038 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089648038"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089648038"}]},"ts":"1689089648038"} 2023-07-11 15:34:08,038 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:08,039 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4cee1b24173bbc90f5d9f4ff9b9e03e7, NAME => 'hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:08,039 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:08,039 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:08,039 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:08,039 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:08,041 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-11 15:34:08,041 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:34:08,042 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089648042"}]},"ts":"1689089648042"} 2023-07-11 15:34:08,042 INFO [StoreOpener-4cee1b24173bbc90f5d9f4ff9b9e03e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:08,043 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-11 15:34:08,043 DEBUG [StoreOpener-4cee1b24173bbc90f5d9f4ff9b9e03e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7/info 2023-07-11 15:34:08,043 DEBUG [StoreOpener-4cee1b24173bbc90f5d9f4ff9b9e03e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7/info 2023-07-11 15:34:08,044 INFO [StoreOpener-4cee1b24173bbc90f5d9f4ff9b9e03e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4cee1b24173bbc90f5d9f4ff9b9e03e7 columnFamilyName info 2023-07-11 15:34:08,045 INFO [StoreOpener-4cee1b24173bbc90f5d9f4ff9b9e03e7-1] regionserver.HStore(310): Store=4cee1b24173bbc90f5d9f4ff9b9e03e7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:08,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:08,046 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:08,046 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:08,046 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:08,047 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:08,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:08,047 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:08,047 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0b221651b0e21b16466af1eff01843d1, ASSIGN}] 2023-07-11 15:34:08,048 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0b221651b0e21b16466af1eff01843d1, ASSIGN 2023-07-11 15:34:08,050 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0b221651b0e21b16466af1eff01843d1, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42857,1689089646545; forceNewPlan=false, retain=false 2023-07-11 15:34:08,054 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:08,076 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:08,077 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 4cee1b24173bbc90f5d9f4ff9b9e03e7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11464779200, jitterRate=0.06774076819419861}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:08,077 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 4cee1b24173bbc90f5d9f4ff9b9e03e7: 2023-07-11 15:34:08,079 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7., pid=6, masterSystemTime=1689089648028 2023-07-11 15:34:08,083 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:08,084 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 
2023-07-11 15:34:08,084 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4cee1b24173bbc90f5d9f4ff9b9e03e7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:08,085 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089648084"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089648084"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089648084"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089648084"}]},"ts":"1689089648084"} 2023-07-11 15:34:08,087 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-11 15:34:08,088 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 4cee1b24173bbc90f5d9f4ff9b9e03e7, server=jenkins-hbase9.apache.org,42857,1689089646545 in 211 msec 2023-07-11 15:34:08,090 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-11 15:34:08,090 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4cee1b24173bbc90f5d9f4ff9b9e03e7, ASSIGN in 370 msec 2023-07-11 15:34:08,091 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:34:08,091 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089648091"}]},"ts":"1689089648091"} 2023-07-11 15:34:08,094 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-11 15:34:08,100 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:34:08,101 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 483 msec 2023-07-11 15:34:08,119 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-11 15:34:08,120 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:08,120 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:08,124 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:34:08,125 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:35552, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-11 15:34:08,129 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-11 15:34:08,143 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:08,147 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 18 msec 2023-07-11 15:34:08,151 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-11 15:34:08,156 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-11 15:34:08,157 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-11 15:34:08,200 INFO [jenkins-hbase9:38729] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-11 15:34:08,201 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=0b221651b0e21b16466af1eff01843d1, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:08,202 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089648201"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089648201"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089648201"}]},"ts":"1689089648201"} 2023-07-11 15:34:08,203 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=8, state=RUNNABLE; OpenRegionProcedure 0b221651b0e21b16466af1eff01843d1, server=jenkins-hbase9.apache.org,42857,1689089646545}] 2023-07-11 15:34:08,359 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:08,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0b221651b0e21b16466af1eff01843d1, NAME => 'hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:08,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 15:34:08,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. service=MultiRowMutationService 2023-07-11 15:34:08,360 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
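
The 'hbase:rsgroup' descriptor carried through the log attaches the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. On the client side the same attributes can be declared on a table descriptor as in the fragment below, with the class names passed as strings exactly as they appear in the logged TABLE_ATTRIBUTES. The fragment reuses the imports and the 'admin' handle from the earlier sketch, and the table name is made up for illustration.

    // Reuses the imports and the 'admin' handle from the previous sketch.
    ColumnFamilyDescriptor m = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
        .setMaxVersions(1)
        .build();
    TableDescriptor groupLike = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))
        .setColumnFamily(m)
        // Same coprocessor the master attaches to hbase:rsgroup.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // Same split policy: keep the table in a single region.
        .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
    admin.createTable(groupLike);
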
2023-07-11 15:34:08,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:08,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:08,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:08,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:08,366 INFO [StoreOpener-0b221651b0e21b16466af1eff01843d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:08,367 DEBUG [StoreOpener-0b221651b0e21b16466af1eff01843d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1/m 2023-07-11 15:34:08,367 DEBUG [StoreOpener-0b221651b0e21b16466af1eff01843d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1/m 2023-07-11 15:34:08,368 INFO [StoreOpener-0b221651b0e21b16466af1eff01843d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0b221651b0e21b16466af1eff01843d1 columnFamilyName m 2023-07-11 15:34:08,368 INFO [StoreOpener-0b221651b0e21b16466af1eff01843d1-1] regionserver.HStore(310): Store=0b221651b0e21b16466af1eff01843d1/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:08,369 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:08,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:08,373 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(1055): writing seq id for 0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:08,376 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:08,376 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 0b221651b0e21b16466af1eff01843d1; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@b71cf12, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:08,376 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 0b221651b0e21b16466af1eff01843d1: 2023-07-11 15:34:08,377 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1., pid=11, masterSystemTime=1689089648355 2023-07-11 15:34:08,378 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:08,378 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:08,379 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=0b221651b0e21b16466af1eff01843d1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:08,379 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089648379"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089648379"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089648379"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089648379"}]},"ts":"1689089648379"} 2023-07-11 15:34:08,395 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=8 2023-07-11 15:34:08,395 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=8, state=SUCCESS; OpenRegionProcedure 0b221651b0e21b16466af1eff01843d1, server=jenkins-hbase9.apache.org,42857,1689089646545 in 178 msec 2023-07-11 15:34:08,398 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-11 15:34:08,398 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0b221651b0e21b16466af1eff01843d1, ASSIGN in 349 msec 2023-07-11 15:34:08,412 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:08,414 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 263 msec 2023-07-11 15:34:08,415 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=7, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:34:08,415 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089648415"}]},"ts":"1689089648415"} 2023-07-11 15:34:08,417 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-11 15:34:08,419 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:34:08,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 442 msec 2023-07-11 15:34:08,443 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-11 15:34:08,445 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-11 15:34:08,445 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.739sec 2023-07-11 15:34:08,448 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-11 15:34:08,449 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:08,450 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-11 15:34:08,450 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-11 15:34:08,452 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:08,453 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:34:08,454 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-11 15:34:08,456 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,456 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad empty. 2023-07-11 15:34:08,457 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,457 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-11 15:34:08,457 DEBUG [Listener at localhost/36297] zookeeper.ReadOnlyZKClient(139): Connect 0x64896c5f to 127.0.0.1:51551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:08,459 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-11 15:34:08,459 INFO [master/jenkins-hbase9:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-11 15:34:08,463 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:08,464 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:08,464 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-11 15:34:08,464 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-11 15:34:08,464 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38729,1689089645968-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-11 15:34:08,464 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,38729,1689089645968-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
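
Once MasterQuotaManager has created 'hbase:quota' (the 'q' and 'u' families logged above), quota settings written by clients land in that table. A minimal, illustrative fragment of the quota client API follows; the user name and limits are arbitrary examples, and the fragment again assumes the 'admin' handle from the first sketch plus imports for org.apache.hadoop.hbase.quotas.QuotaSettingsFactory, org.apache.hadoop.hbase.quotas.ThrottleType and java.util.concurrent.TimeUnit.

    // Throttle a (hypothetical) user to 100 requests per second; stored in hbase:quota.
    admin.setQuota(QuotaSettingsFactory.throttleUser(
        "jenkins", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
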
2023-07-11 15:34:08,467 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-11 15:34:08,469 DEBUG [Listener at localhost/36297] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18fae515, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:08,474 DEBUG [hconnection-0x66398128-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:34:08,476 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:39762, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:34:08,478 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:08,478 INFO [Listener at localhost/36297] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:08,482 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-11 15:34:08,482 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-11 15:34:08,490 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:08,491 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:08,491 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:08,491 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => f3e7615d4ba8486f7817dc84e45cf9ad, NAME => 'hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp 2023-07-11 15:34:08,493 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 15:34:08,494 INFO 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,38729,1689089645968] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-11 15:34:08,506 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:08,507 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing f3e7615d4ba8486f7817dc84e45cf9ad, disabling compactions & flushes 2023-07-11 15:34:08,507 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:08,507 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:08,507 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. after waiting 0 ms 2023-07-11 15:34:08,507 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:08,507 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:08,507 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for f3e7615d4ba8486f7817dc84e45cf9ad: 2023-07-11 15:34:08,510 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:34:08,511 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689089648511"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089648511"}]},"ts":"1689089648511"} 2023-07-11 15:34:08,512 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
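
After the startup worker reports that hbase:rsgroup is online and GroupBasedLoadBalancer is active, group membership can be read back through the rsgroup admin client shipped in the hbase-rsgroup module. The fragment below is a sketch against the branch-2 RSGroupAdminClient API; treat the exact class and method names as assumptions if you are on a different version. It reuses the 'conn' connection from the first sketch and assumes imports for org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient and org.apache.hadoop.hbase.rsgroup.RSGroupInfo.

    // Read back the 'default' group once the rsgroup table is online.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
    System.out.println("Servers in 'default' group: " + defaultGroup.getServers());
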
2023-07-11 15:34:08,513 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:34:08,513 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089648513"}]},"ts":"1689089648513"} 2023-07-11 15:34:08,514 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-11 15:34:08,518 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:08,518 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:08,518 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:08,518 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:08,518 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:08,519 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=f3e7615d4ba8486f7817dc84e45cf9ad, ASSIGN}] 2023-07-11 15:34:08,519 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=f3e7615d4ba8486f7817dc84e45cf9ad, ASSIGN 2023-07-11 15:34:08,520 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=f3e7615d4ba8486f7817dc84e45cf9ad, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42857,1689089646545; forceNewPlan=false, retain=false 2023-07-11 15:34:08,581 DEBUG [Listener at localhost/36297] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-11 15:34:08,583 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46164, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-11 15:34:08,586 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-11 15:34:08,586 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:08,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-11 15:34:08,587 DEBUG [Listener at localhost/36297] zookeeper.ReadOnlyZKClient(139): Connect 0x35fa0ee6 to 127.0.0.1:51551 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:08,591 DEBUG [Listener at localhost/36297] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fe0e9b0, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:08,592 INFO [Listener at localhost/36297] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51551 2023-07-11 15:34:08,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.HMaster$15(3014): Client=jenkins//172.31.2.10 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-11 15:34:08,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-11 15:34:08,600 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:08,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10154f76112000a connected 2023-07-11 15:34:08,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-11 15:34:08,610 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:08,613 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-11 15:34:08,670 INFO [jenkins-hbase9:38729] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
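
The two client actions logged just above, "set balanceSwitch=false" and the creation of namespace 'np1' with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2, correspond to the following Admin calls. This is a sketch reusing the 'admin' handle from the first example and assuming an import of org.apache.hadoop.hbase.NamespaceDescriptor.

    // Turn the balancer off, waiting for any in-flight balance run to finish.
    admin.balancerSwitch(false, true);

    // Create the 'np1' namespace with the same quota properties the test passes.
    admin.createNamespace(NamespaceDescriptor.create("np1")
        .addConfiguration("hbase.namespace.quota.maxregions", "5")
        .addConfiguration("hbase.namespace.quota.maxtables", "2")
        .build());
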
2023-07-11 15:34:08,672 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f3e7615d4ba8486f7817dc84e45cf9ad, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:08,672 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689089648672"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089648672"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089648672"}]},"ts":"1689089648672"} 2023-07-11 15:34:08,673 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure f3e7615d4ba8486f7817dc84e45cf9ad, server=jenkins-hbase9.apache.org,42857,1689089646545}] 2023-07-11 15:34:08,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-11 15:34:08,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:08,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-11 15:34:08,715 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:08,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-11 15:34:08,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 15:34:08,717 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:08,717 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 15:34:08,720 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:34:08,722 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:08,722 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/6b164966ea5878d0884e097c2193fcb0 empty. 
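
The logged create of 'np1:table1' with a single 'fam1' family is, from the client's perspective, an ordinary createTable on a namespace-qualified TableName; the master runs it as the CreateTableProcedure whose pid=16 state transitions appear above and below. A sketch, again reusing the earlier imports and the 'admin' handle:

    TableDescriptor table1 = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("np1", "table1"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
        .build();
    admin.createTable(table1);   // returns once the CreateTableProcedure has finished
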
2023-07-11 15:34:08,723 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:08,723 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-11 15:34:08,728 WARN [IPC Server handler 4 on default port 46437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-11 15:34:08,728 WARN [IPC Server handler 4 on default port 46437] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-11 15:34:08,728 WARN [IPC Server handler 4 on default port 46437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-11 15:34:08,738 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:08,739 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6b164966ea5878d0884e097c2193fcb0, NAME => 'np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp 2023-07-11 15:34:08,749 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:08,749 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 6b164966ea5878d0884e097c2193fcb0, disabling compactions & flushes 2023-07-11 15:34:08,750 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:08,750 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 
2023-07-11 15:34:08,750 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. after waiting 0 ms 2023-07-11 15:34:08,750 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:08,750 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:08,750 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 6b164966ea5878d0884e097c2193fcb0: 2023-07-11 15:34:08,752 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:34:08,753 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089648753"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089648753"}]},"ts":"1689089648753"} 2023-07-11 15:34:08,754 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:34:08,755 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:34:08,755 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089648755"}]},"ts":"1689089648755"} 2023-07-11 15:34:08,756 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-11 15:34:08,759 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:08,759 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:08,759 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:08,759 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:08,759 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:08,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=6b164966ea5878d0884e097c2193fcb0, ASSIGN}] 2023-07-11 15:34:08,760 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=6b164966ea5878d0884e097c2193fcb0, ASSIGN 2023-07-11 15:34:08,761 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=6b164966ea5878d0884e097c2193fcb0, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,42857,1689089646545; forceNewPlan=false, retain=false 2023-07-11 15:34:08,817 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 15:34:08,828 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:08,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f3e7615d4ba8486f7817dc84e45cf9ad, NAME => 'hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:08,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:08,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,830 INFO [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,831 DEBUG [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad/q 2023-07-11 15:34:08,831 DEBUG [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad/q 2023-07-11 15:34:08,831 INFO [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f3e7615d4ba8486f7817dc84e45cf9ad columnFamilyName q 2023-07-11 15:34:08,832 INFO [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] regionserver.HStore(310): Store=f3e7615d4ba8486f7817dc84e45cf9ad/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:08,832 INFO [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,833 DEBUG [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad/u 2023-07-11 15:34:08,833 DEBUG [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad/u 2023-07-11 15:34:08,833 INFO [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f3e7615d4ba8486f7817dc84e45cf9ad columnFamilyName u 2023-07-11 15:34:08,834 INFO [StoreOpener-f3e7615d4ba8486f7817dc84e45cf9ad-1] regionserver.HStore(310): Store=f3e7615d4ba8486f7817dc84e45cf9ad/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:08,835 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,835 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
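The store-open records above echo per-family settings (cacheDataOnRead=true, compression=NONE, encoding=NONE, bloom filtering) that come from the column family descriptor. Purely as an illustration, and not the code that created the hbase:quota families (the master creates those internally), an equivalent descriptor could be built like this with the standard HBase 2.x client API; the class name is made up for the sketch:

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class QuotaFamilySketch {
      public static void main(String[] args) {
        // Family 'q' with the defaults visible in the store-open log lines:
        // data cached on read, nothing cached on write, no compression, no encoding.
        ColumnFamilyDescriptor q = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("q"))
            .setBlockCacheEnabled(true)                       // cacheDataOnRead=true
            .setCacheDataOnWrite(false)                       // cacheDataOnWrite=false
            .setCompressionType(Compression.Algorithm.NONE)   // compression=NONE
            .setDataBlockEncoding(DataBlockEncoding.NONE)     // encoding=NONE
            .setBloomFilterType(BloomType.ROW)
            .build();
        System.out.println(q);
      }
    }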
2023-07-11 15:34:08,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:08,840 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:08,840 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened f3e7615d4ba8486f7817dc84e45cf9ad; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11860121120, jitterRate=0.10455985367298126}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-11 15:34:08,840 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for f3e7615d4ba8486f7817dc84e45cf9ad: 2023-07-11 15:34:08,841 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad., pid=15, masterSystemTime=1689089648825 2023-07-11 15:34:08,842 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:08,842 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:08,843 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f3e7615d4ba8486f7817dc84e45cf9ad, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:08,843 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689089648843"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089648843"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089648843"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089648843"}]},"ts":"1689089648843"} 2023-07-11 15:34:08,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-11 15:34:08,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure f3e7615d4ba8486f7817dc84e45cf9ad, server=jenkins-hbase9.apache.org,42857,1689089646545 in 171 msec 2023-07-11 15:34:08,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-11 15:34:08,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=f3e7615d4ba8486f7817dc84e45cf9ad, ASSIGN in 327 msec 2023-07-11 15:34:08,847 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:34:08,848 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089648848"}]},"ts":"1689089648848"} 2023-07-11 15:34:08,849 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-11 15:34:08,851 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:34:08,852 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 402 msec 2023-07-11 15:34:08,911 INFO [jenkins-hbase9:38729] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-11 15:34:08,912 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6b164966ea5878d0884e097c2193fcb0, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:08,912 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089648912"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089648912"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089648912"}]},"ts":"1689089648912"} 2023-07-11 15:34:08,914 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 6b164966ea5878d0884e097c2193fcb0, server=jenkins-hbase9.apache.org,42857,1689089646545}] 2023-07-11 15:34:09,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 15:34:09,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 
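The records above walk CreateTableProcedure pid=16 for np1:table1 through CREATE_TABLE_ADD_TO_META, CREATE_TABLE_ASSIGN_REGIONS and the region open on jenkins-hbase9.apache.org,42857. As a rough illustration only, not the test's actual code, the client-side call that such a procedure typically corresponds to looks like the following sketch against the standard HBase 2.x Admin API; the table and family names are taken from the log, everything else (class name, configuration) is assumed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTable1Sketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // One column family 'fam1', as in the descriptor echoed by the master above.
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("np1", "table1"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build());
          // The synchronous call returns once the CreateTableProcedure completes,
          // i.e. the single region has been added to hbase:meta and assigned.
        }
      }
    }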
2023-07-11 15:34:09,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6b164966ea5878d0884e097c2193fcb0, NAME => 'np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:09,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:09,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,071 INFO [StoreOpener-6b164966ea5878d0884e097c2193fcb0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,073 DEBUG [StoreOpener-6b164966ea5878d0884e097c2193fcb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/np1/table1/6b164966ea5878d0884e097c2193fcb0/fam1 2023-07-11 15:34:09,073 DEBUG [StoreOpener-6b164966ea5878d0884e097c2193fcb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/np1/table1/6b164966ea5878d0884e097c2193fcb0/fam1 2023-07-11 15:34:09,073 INFO [StoreOpener-6b164966ea5878d0884e097c2193fcb0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6b164966ea5878d0884e097c2193fcb0 columnFamilyName fam1 2023-07-11 15:34:09,074 INFO [StoreOpener-6b164966ea5878d0884e097c2193fcb0-1] regionserver.HStore(310): Store=6b164966ea5878d0884e097c2193fcb0/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:09,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/np1/table1/6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/np1/table1/6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/np1/table1/6b164966ea5878d0884e097c2193fcb0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:09,079 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 6b164966ea5878d0884e097c2193fcb0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12075362240, jitterRate=0.12460574507713318}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:09,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 6b164966ea5878d0884e097c2193fcb0: 2023-07-11 15:34:09,080 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0., pid=18, masterSystemTime=1689089649065 2023-07-11 15:34:09,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:09,082 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:09,082 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6b164966ea5878d0884e097c2193fcb0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:09,082 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089649082"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089649082"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089649082"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089649082"}]},"ts":"1689089649082"} 2023-07-11 15:34:09,086 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-11 15:34:09,086 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 6b164966ea5878d0884e097c2193fcb0, server=jenkins-hbase9.apache.org,42857,1689089646545 in 170 msec 2023-07-11 15:34:09,091 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-11 15:34:09,091 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=6b164966ea5878d0884e097c2193fcb0, ASSIGN in 327 msec 2023-07-11 15:34:09,092 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:34:09,092 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089649092"}]},"ts":"1689089649092"} 2023-07-11 15:34:09,093 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-11 15:34:09,096 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:34:09,098 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 384 msec 2023-07-11 15:34:09,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 15:34:09,319 INFO [Listener at localhost/36297] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-11 15:34:09,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:09,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-11 15:34:09,323 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:09,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-11 15:34:09,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-11 15:34:09,341 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=20 msec 2023-07-11 15:34:09,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-11 15:34:09,428 INFO [Listener at localhost/36297] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
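pid=19 above is rolled back with QuotaExceededException because namespace np1 is capped at 5 regions while the requested np1:table2 would add 6. A minimal, hypothetical sketch of how such a cap is usually declared and how the failure surfaces on the client follows; the quota key hbase.namespace.quota.maxregions, the split keys, and the class name are assumptions, not taken from this run:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;
    import java.io.IOException;

    public class NamespaceRegionQuotaSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Cap the namespace at 5 regions (quota key assumed).
          admin.createNamespace(NamespaceDescriptor.create("np1")
              .addConfiguration("hbase.namespace.quota.maxregions", "5")
              .build());

          // Asking for 6 regions (5 split keys) pushes the namespace past its quota,
          // so the CreateTableProcedure rolls back and the client sees the failure.
          byte[][] splits = {
              Bytes.toBytes("b"), Bytes.toBytes("c"), Bytes.toBytes("d"),
              Bytes.toBytes("e"), Bytes.toBytes("f")
          };
          try {
            admin.createTable(TableDescriptorBuilder
                .newBuilder(TableName.valueOf("np1", "table2"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
                .build(), splits);
          } catch (IOException e) {
            // Carries the QuotaExceededException message, as reported for pid=19 above.
            System.out.println("create rejected: " + e.getMessage());
          }
        }
      }
    }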
2023-07-11 15:34:09,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:09,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:09,430 INFO [Listener at localhost/36297] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-11 15:34:09,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable np1:table1 2023-07-11 15:34:09,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-11 15:34:09,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 15:34:09,433 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089649433"}]},"ts":"1689089649433"} 2023-07-11 15:34:09,434 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-11 15:34:09,436 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-11 15:34:09,439 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=6b164966ea5878d0884e097c2193fcb0, UNASSIGN}] 2023-07-11 15:34:09,440 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=6b164966ea5878d0884e097c2193fcb0, UNASSIGN 2023-07-11 15:34:09,440 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=6b164966ea5878d0884e097c2193fcb0, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:09,441 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089649440"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089649440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089649440"}]},"ts":"1689089649440"} 2023-07-11 15:34:09,442 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 6b164966ea5878d0884e097c2193fcb0, server=jenkins-hbase9.apache.org,42857,1689089646545}] 2023-07-11 15:34:09,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 15:34:09,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 6b164966ea5878d0884e097c2193fcb0, disabling compactions & flushes 2023-07-11 15:34:09,595 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:09,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:09,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. after waiting 0 ms 2023-07-11 15:34:09,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:09,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/np1/table1/6b164966ea5878d0884e097c2193fcb0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:34:09,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0. 2023-07-11 15:34:09,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 6b164966ea5878d0884e097c2193fcb0: 2023-07-11 15:34:09,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,602 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=6b164966ea5878d0884e097c2193fcb0, regionState=CLOSED 2023-07-11 15:34:09,602 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089649602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089649602"}]},"ts":"1689089649602"} 2023-07-11 15:34:09,604 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-11 15:34:09,604 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 6b164966ea5878d0884e097c2193fcb0, server=jenkins-hbase9.apache.org,42857,1689089646545 in 161 msec 2023-07-11 15:34:09,606 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-11 15:34:09,606 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=6b164966ea5878d0884e097c2193fcb0, UNASSIGN in 165 msec 2023-07-11 15:34:09,606 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089649606"}]},"ts":"1689089649606"} 2023-07-11 15:34:09,607 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-11 15:34:09,609 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-11 15:34:09,610 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 179 msec 2023-07-11 15:34:09,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 15:34:09,735 INFO [Listener at localhost/36297] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-11 15:34:09,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete np1:table1 2023-07-11 15:34:09,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-11 15:34:09,738 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 15:34:09,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-11 15:34:09,739 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 15:34:09,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:09,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 15:34:09,744 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,746 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/6b164966ea5878d0884e097c2193fcb0/fam1, FileablePath, hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/6b164966ea5878d0884e097c2193fcb0/recovered.edits] 2023-07-11 15:34:09,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-11 15:34:09,753 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/6b164966ea5878d0884e097c2193fcb0/recovered.edits/4.seqid to hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/archive/data/np1/table1/6b164966ea5878d0884e097c2193fcb0/recovered.edits/4.seqid 2023-07-11 15:34:09,754 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/.tmp/data/np1/table1/6b164966ea5878d0884e097c2193fcb0 2023-07-11 15:34:09,754 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-11 15:34:09,760 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 15:34:09,764 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-11 15:34:09,768 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-11 15:34:09,774 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 15:34:09,774 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-11 15:34:09,774 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089649774"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:09,776 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 15:34:09,776 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6b164966ea5878d0884e097c2193fcb0, NAME => 'np1:table1,,1689089648711.6b164966ea5878d0884e097c2193fcb0.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 15:34:09,776 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-11 15:34:09,776 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689089649776"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:09,778 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-11 15:34:09,782 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 15:34:09,783 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 47 msec 2023-07-11 15:34:09,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-11 15:34:09,851 INFO [Listener at localhost/36297] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-11 15:34:09,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.HMaster$17(3086): Client=jenkins//172.31.2.10 delete np1 2023-07-11 15:34:09,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-11 15:34:09,864 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 15:34:09,867 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 15:34:09,869 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 15:34:09,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-11 15:34:09,870 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-11 15:34:09,870 DEBUG [Listener at 
localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:09,871 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 15:34:09,872 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 15:34:09,874 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-11 15:34:09,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38729] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-11 15:34:09,971 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-11 15:34:09,971 INFO [Listener at localhost/36297] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-11 15:34:09,971 DEBUG [Listener at localhost/36297] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x64896c5f to 127.0.0.1:51551 2023-07-11 15:34:09,971 DEBUG [Listener at localhost/36297] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:09,971 DEBUG [Listener at localhost/36297] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-11 15:34:09,971 DEBUG [Listener at localhost/36297] util.JVMClusterUtil(257): Found active master hash=1955215869, stopped=false 2023-07-11 15:34:09,971 DEBUG [Listener at localhost/36297] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 15:34:09,971 DEBUG [Listener at localhost/36297] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 15:34:09,972 DEBUG [Listener at localhost/36297] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-11 15:34:09,972 INFO [Listener at localhost/36297] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:09,973 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:09,973 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:09,973 INFO [Listener at localhost/36297] procedure2.ProcedureExecutor(629): Stopping 2023-07-11 15:34:09,973 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:09,973 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:09,974 DEBUG 
[Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:09,975 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:09,975 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:09,975 DEBUG [Listener at localhost/36297] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5661ea84 to 127.0.0.1:51551 2023-07-11 15:34:09,975 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:09,975 DEBUG [Listener at localhost/36297] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:09,975 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:09,975 INFO [Listener at localhost/36297] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,40917,1689089646187' ***** 2023-07-11 15:34:09,975 INFO [Listener at localhost/36297] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:09,975 INFO [Listener at localhost/36297] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,43971,1689089646380' ***** 2023-07-11 15:34:09,975 INFO [Listener at localhost/36297] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:09,976 INFO [Listener at localhost/36297] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,42857,1689089646545' ***** 2023-07-11 15:34:09,976 INFO [Listener at localhost/36297] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:09,976 INFO [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:09,976 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:09,976 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:09,986 INFO [RS:1;jenkins-hbase9:43971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6a24c095{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:09,986 INFO [RS:2;jenkins-hbase9:42857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@121437bd{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:09,987 INFO [RS:0;jenkins-hbase9:40917] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1c33904b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
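Between pid=20 and pid=24 above, np1:table1 is disabled, then deleted (HFiles archived, hbase:meta rows removed, the rsgroup mapping cleaned up), and the now-empty np1 namespace is dropped before the cluster shutdown begins. A sketch of the corresponding client calls, assuming the standard HBase 2.x Admin API (class and variable names are illustrative):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName t = TableName.valueOf("np1", "table1");
          admin.disableTable(t);        // DisableTableProcedure: unassign region, mark DISABLED
          admin.deleteTable(t);         // DeleteTableProcedure: archive files, clean hbase:meta
          admin.deleteNamespace("np1"); // DeleteNamespaceProcedure: only legal once the namespace is empty
        }
      }
    }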
2023-07-11 15:34:09,987 INFO [RS:0;jenkins-hbase9:40917] server.AbstractConnector(383): Stopped ServerConnector@5c60221f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:09,987 INFO [RS:2;jenkins-hbase9:42857] server.AbstractConnector(383): Stopped ServerConnector@36aabc41{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:09,987 INFO [RS:1;jenkins-hbase9:43971] server.AbstractConnector(383): Stopped ServerConnector@13d8a6b4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:09,987 INFO [RS:2;jenkins-hbase9:42857] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:09,987 INFO [RS:0;jenkins-hbase9:40917] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:09,987 INFO [RS:1;jenkins-hbase9:43971] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:09,991 INFO [RS:0;jenkins-hbase9:40917] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7acb83b4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:09,991 INFO [RS:1;jenkins-hbase9:43971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@20863e22{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:09,991 INFO [RS:2;jenkins-hbase9:42857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10a1cb35{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:09,992 INFO [RS:1;jenkins-hbase9:43971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6c0f5bac{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:09,992 INFO [RS:2;jenkins-hbase9:42857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@445d8bcc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:09,992 INFO [RS:0;jenkins-hbase9:40917] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39385d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:09,993 INFO [RS:0;jenkins-hbase9:40917] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:09,993 INFO [RS:1;jenkins-hbase9:43971] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:09,993 INFO [RS:2;jenkins-hbase9:42857] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:09,993 INFO [RS:1;jenkins-hbase9:43971] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:09,993 INFO [RS:2;jenkins-hbase9:42857] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-11 15:34:09,993 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:09,993 INFO [RS:2;jenkins-hbase9:42857] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:09,993 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:09,993 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(3305): Received CLOSE for 0b221651b0e21b16466af1eff01843d1 2023-07-11 15:34:09,993 INFO [RS:1;jenkins-hbase9:43971] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:09,993 INFO [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:09,993 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:09,995 DEBUG [RS:1;jenkins-hbase9:43971] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6e62dd7f to 127.0.0.1:51551 2023-07-11 15:34:09,993 INFO [RS:0;jenkins-hbase9:40917] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:09,996 DEBUG [RS:1;jenkins-hbase9:43971] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:09,996 INFO [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,43971,1689089646380; all regions closed. 2023-07-11 15:34:09,996 DEBUG [RS:1;jenkins-hbase9:43971] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-11 15:34:09,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 0b221651b0e21b16466af1eff01843d1, disabling compactions & flushes 2023-07-11 15:34:09,993 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(3305): Received CLOSE for f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:09,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:09,996 INFO [RS:0;jenkins-hbase9:40917] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:09,997 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:09,997 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(3305): Received CLOSE for 4cee1b24173bbc90f5d9f4ff9b9e03e7 2023-07-11 15:34:09,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:09,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. after waiting 0 ms 2023-07-11 15:34:09,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 
2023-07-11 15:34:09,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 0b221651b0e21b16466af1eff01843d1 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-11 15:34:09,997 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:09,997 DEBUG [RS:0;jenkins-hbase9:40917] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4e73f1e8 to 127.0.0.1:51551 2023-07-11 15:34:09,998 DEBUG [RS:2;jenkins-hbase9:42857] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x76a5d00e to 127.0.0.1:51551 2023-07-11 15:34:09,998 DEBUG [RS:2;jenkins-hbase9:42857] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:09,998 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-11 15:34:09,998 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1478): Online Regions={0b221651b0e21b16466af1eff01843d1=hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1., f3e7615d4ba8486f7817dc84e45cf9ad=hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad., 4cee1b24173bbc90f5d9f4ff9b9e03e7=hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7.} 2023-07-11 15:34:09,998 DEBUG [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1504): Waiting on 0b221651b0e21b16466af1eff01843d1, 4cee1b24173bbc90f5d9f4ff9b9e03e7, f3e7615d4ba8486f7817dc84e45cf9ad 2023-07-11 15:34:09,998 DEBUG [RS:0;jenkins-hbase9:40917] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:09,998 INFO [RS:0;jenkins-hbase9:40917] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:09,998 INFO [RS:0;jenkins-hbase9:40917] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:09,998 INFO [RS:0;jenkins-hbase9:40917] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-11 15:34:09,998 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-11 15:34:10,002 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-11 15:34:10,002 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-11 15:34:10,002 DEBUG [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-11 15:34:10,004 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 15:34:10,005 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 15:34:10,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 15:34:10,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 15:34:10,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 15:34:10,005 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-11 15:34:10,033 DEBUG [RS:1;jenkins-hbase9:43971] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/oldWALs 2023-07-11 15:34:10,033 INFO [RS:1;jenkins-hbase9:43971] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C43971%2C1689089646380:(num 1689089647354) 2023-07-11 15:34:10,033 DEBUG [RS:1;jenkins-hbase9:43971] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:10,033 INFO [RS:1;jenkins-hbase9:43971] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:10,041 INFO [RS:1;jenkins-hbase9:43971] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:10,041 INFO [RS:1;jenkins-hbase9:43971] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:10,041 INFO [RS:1;jenkins-hbase9:43971] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:10,041 INFO [RS:1;jenkins-hbase9:43971] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 15:34:10,041 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
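From the hbase.HBaseTestingUtility(1286) "Shutting down minicluster" record onward, the master and the three region servers (RS:0, RS:1, RS:2) are stopped, closing and flushing their remaining regions. A hedged sketch of the usual JUnit lifecycle that drives this sequence; the class and field names are illustrative and the cluster options of this particular run are not reproduced:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MiniClusterLifecycleSketch {
      // Shared test utility; a real test would configure it further before startup.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        TEST_UTIL.startMiniCluster(3); // three region servers, as in this run
      }

      @AfterClass
      public static void tearDown() throws Exception {
        // Triggers the shutdown sequence seen above: region servers stop, regions
        // close and flush, then the backing DFS and ZooKeeper are torn down.
        TEST_UTIL.shutdownMiniCluster();
      }
    }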
2023-07-11 15:34:10,043 INFO [RS:1;jenkins-hbase9:43971] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:43971 2023-07-11 15:34:10,057 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:10,057 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:10,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1/.tmp/m/7490627de21e43f4811f18c63e264091 2023-07-11 15:34:10,066 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:10,071 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/.tmp/info/eb0f02d856e64c109e5c1f3937cdc152 2023-07-11 15:34:10,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1/.tmp/m/7490627de21e43f4811f18c63e264091 as hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1/m/7490627de21e43f4811f18c63e264091 2023-07-11 15:34:10,082 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb0f02d856e64c109e5c1f3937cdc152 2023-07-11 15:34:10,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1/m/7490627de21e43f4811f18c63e264091, entries=1, sequenceid=7, filesize=4.9 K 2023-07-11 15:34:10,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 0b221651b0e21b16466af1eff01843d1 in 92ms, sequenceid=7, compaction requested=false 2023-07-11 15:34:10,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-11 15:34:10,108 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:10,108 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:10,108 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:10,108 DEBUG 
[Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:10,108 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,43971,1689089646380 2023-07-11 15:34:10,108 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:10,108 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:10,114 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,43971,1689089646380] 2023-07-11 15:34:10,114 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,43971,1689089646380; numProcessing=1 2023-07-11 15:34:10,122 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,43971,1689089646380 already deleted, retry=false 2023-07-11 15:34:10,122 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,43971,1689089646380 expired; onlineServers=2 2023-07-11 15:34:10,126 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/.tmp/rep_barrier/0eef7cd6ab7446ec8cb12d155dcec056 2023-07-11 15:34:10,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/rsgroup/0b221651b0e21b16466af1eff01843d1/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-11 15:34:10,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:34:10,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:10,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 0b221651b0e21b16466af1eff01843d1: 2023-07-11 15:34:10,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689089647977.0b221651b0e21b16466af1eff01843d1. 2023-07-11 15:34:10,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing f3e7615d4ba8486f7817dc84e45cf9ad, disabling compactions & flushes 2023-07-11 15:34:10,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 
2023-07-11 15:34:10,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:10,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. after waiting 0 ms 2023-07-11 15:34:10,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:10,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/quota/f3e7615d4ba8486f7817dc84e45cf9ad/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:34:10,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:10,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for f3e7615d4ba8486f7817dc84e45cf9ad: 2023-07-11 15:34:10,132 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0eef7cd6ab7446ec8cb12d155dcec056 2023-07-11 15:34:10,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689089648449.f3e7615d4ba8486f7817dc84e45cf9ad. 2023-07-11 15:34:10,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 4cee1b24173bbc90f5d9f4ff9b9e03e7, disabling compactions & flushes 2023-07-11 15:34:10,132 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:10,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:10,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. after waiting 0 ms 2023-07-11 15:34:10,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 
2023-07-11 15:34:10,132 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 4cee1b24173bbc90f5d9f4ff9b9e03e7 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-11 15:34:10,144 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-11 15:34:10,144 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-11 15:34:10,150 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7/.tmp/info/16261e534e564e90b5385c105bdf43da 2023-07-11 15:34:10,150 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/.tmp/table/ba8b2c7a055247558b93809480556584 2023-07-11 15:34:10,156 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ba8b2c7a055247558b93809480556584 2023-07-11 15:34:10,156 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 16261e534e564e90b5385c105bdf43da 2023-07-11 15:34:10,157 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/.tmp/info/eb0f02d856e64c109e5c1f3937cdc152 as hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/info/eb0f02d856e64c109e5c1f3937cdc152 2023-07-11 15:34:10,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7/.tmp/info/16261e534e564e90b5385c105bdf43da as hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7/info/16261e534e564e90b5385c105bdf43da 2023-07-11 15:34:10,162 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 16261e534e564e90b5385c105bdf43da 2023-07-11 15:34:10,163 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7/info/16261e534e564e90b5385c105bdf43da, entries=3, sequenceid=8, filesize=5.0 K 2023-07-11 15:34:10,163 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 4cee1b24173bbc90f5d9f4ff9b9e03e7 in 31ms, sequenceid=8, compaction requested=false 2023-07-11 15:34:10,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-11 15:34:10,164 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb0f02d856e64c109e5c1f3937cdc152 2023-07-11 15:34:10,164 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/info/eb0f02d856e64c109e5c1f3937cdc152, entries=32, sequenceid=31, filesize=8.5 K 2023-07-11 15:34:10,167 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/.tmp/rep_barrier/0eef7cd6ab7446ec8cb12d155dcec056 as hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/rep_barrier/0eef7cd6ab7446ec8cb12d155dcec056 2023-07-11 15:34:10,169 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/namespace/4cee1b24173bbc90f5d9f4ff9b9e03e7/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-11 15:34:10,170 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:10,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 4cee1b24173bbc90f5d9f4ff9b9e03e7: 2023-07-11 15:34:10,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689089647616.4cee1b24173bbc90f5d9f4ff9b9e03e7. 2023-07-11 15:34:10,172 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0eef7cd6ab7446ec8cb12d155dcec056 2023-07-11 15:34:10,172 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/rep_barrier/0eef7cd6ab7446ec8cb12d155dcec056, entries=1, sequenceid=31, filesize=4.9 K 2023-07-11 15:34:10,173 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/.tmp/table/ba8b2c7a055247558b93809480556584 as hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/table/ba8b2c7a055247558b93809480556584 2023-07-11 15:34:10,178 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-11 15:34:10,178 INFO [regionserver/jenkins-hbase9:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-11 15:34:10,179 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ba8b2c7a055247558b93809480556584 2023-07-11 15:34:10,179 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/table/ba8b2c7a055247558b93809480556584, entries=8, sequenceid=31, filesize=5.2 K 2023-07-11 15:34:10,180 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 175ms, sequenceid=31, compaction requested=false 2023-07-11 15:34:10,180 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-11 15:34:10,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-11 15:34:10,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:34:10,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 15:34:10,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 15:34:10,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-11 15:34:10,198 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,42857,1689089646545; all regions closed. 2023-07-11 15:34:10,198 DEBUG [RS:2;jenkins-hbase9:42857] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-11 15:34:10,204 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,40917,1689089646187; all regions closed. 2023-07-11 15:34:10,205 DEBUG [RS:0;jenkins-hbase9:40917] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-11 15:34:10,215 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/WALs/jenkins-hbase9.apache.org,40917,1689089646187/jenkins-hbase9.apache.org%2C40917%2C1689089646187.meta.1689089647545.meta not finished, retry = 0 2023-07-11 15:34:10,217 DEBUG [RS:2;jenkins-hbase9:42857] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/oldWALs 2023-07-11 15:34:10,217 INFO [RS:2;jenkins-hbase9:42857] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C42857%2C1689089646545:(num 1689089647361) 2023-07-11 15:34:10,217 DEBUG [RS:2;jenkins-hbase9:42857] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:10,217 INFO [RS:2;jenkins-hbase9:42857] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:10,218 INFO [RS:2;jenkins-hbase9:42857] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:10,218 INFO [RS:2;jenkins-hbase9:42857] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:10,218 INFO [RS:2;jenkins-hbase9:42857] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:10,218 INFO [RS:2;jenkins-hbase9:42857] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-11 15:34:10,218 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 15:34:10,219 INFO [RS:2;jenkins-hbase9:42857] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:42857 2023-07-11 15:34:10,223 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:10,223 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,42857,1689089646545 2023-07-11 15:34:10,224 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:10,224 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,42857,1689089646545] 2023-07-11 15:34:10,225 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,42857,1689089646545; numProcessing=2 2023-07-11 15:34:10,226 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,42857,1689089646545 already deleted, retry=false 2023-07-11 15:34:10,226 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,42857,1689089646545 expired; onlineServers=1 2023-07-11 15:34:10,318 DEBUG [RS:0;jenkins-hbase9:40917] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/oldWALs 2023-07-11 15:34:10,318 INFO [RS:0;jenkins-hbase9:40917] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C40917%2C1689089646187.meta:.meta(num 1689089647545) 2023-07-11 15:34:10,324 DEBUG [RS:0;jenkins-hbase9:40917] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/oldWALs 2023-07-11 15:34:10,324 INFO [RS:0;jenkins-hbase9:40917] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C40917%2C1689089646187:(num 1689089647360) 2023-07-11 15:34:10,324 DEBUG [RS:0;jenkins-hbase9:40917] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:10,325 INFO [RS:0;jenkins-hbase9:40917] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:10,325 INFO [RS:0;jenkins-hbase9:40917] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:10,325 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-11 15:34:10,326 INFO [RS:0;jenkins-hbase9:40917] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:40917 2023-07-11 15:34:10,329 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,40917,1689089646187 2023-07-11 15:34:10,329 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:10,330 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,40917,1689089646187] 2023-07-11 15:34:10,330 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,40917,1689089646187; numProcessing=3 2023-07-11 15:34:10,332 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,40917,1689089646187 already deleted, retry=false 2023-07-11 15:34:10,332 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,40917,1689089646187 expired; onlineServers=0 2023-07-11 15:34:10,332 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,38729,1689089645968' ***** 2023-07-11 15:34:10,332 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-11 15:34:10,332 DEBUG [M:0;jenkins-hbase9:38729] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70b5b3b4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:10,332 INFO [M:0;jenkins-hbase9:38729] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:10,334 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:10,334 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:10,334 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:10,334 INFO [M:0;jenkins-hbase9:38729] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@30b9e023{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 15:34:10,335 INFO [M:0;jenkins-hbase9:38729] server.AbstractConnector(383): Stopped ServerConnector@627c5fb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:10,335 INFO [M:0;jenkins-hbase9:38729] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:10,335 INFO [M:0;jenkins-hbase9:38729] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6e61a861{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:10,335 INFO [M:0;jenkins-hbase9:38729] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@611aa7c8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:10,335 INFO [M:0;jenkins-hbase9:38729] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,38729,1689089645968 2023-07-11 15:34:10,335 INFO [M:0;jenkins-hbase9:38729] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,38729,1689089645968; all regions closed. 2023-07-11 15:34:10,335 DEBUG [M:0;jenkins-hbase9:38729] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:10,336 INFO [M:0;jenkins-hbase9:38729] master.HMaster(1491): Stopping master jetty server 2023-07-11 15:34:10,336 INFO [M:0;jenkins-hbase9:38729] server.AbstractConnector(383): Stopped ServerConnector@e7d1f4c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:10,336 DEBUG [M:0;jenkins-hbase9:38729] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-11 15:34:10,337 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-11 15:34:10,337 DEBUG [M:0;jenkins-hbase9:38729] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-11 15:34:10,337 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089646926] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089646926,5,FailOnTimeoutGroup] 2023-07-11 15:34:10,337 INFO [M:0;jenkins-hbase9:38729] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-11 15:34:10,337 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089646912] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089646912,5,FailOnTimeoutGroup] 2023-07-11 15:34:10,337 INFO [M:0;jenkins-hbase9:38729] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-11 15:34:10,338 INFO [M:0;jenkins-hbase9:38729] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:10,338 DEBUG [M:0;jenkins-hbase9:38729] master.HMaster(1512): Stopping service threads 2023-07-11 15:34:10,338 INFO [M:0;jenkins-hbase9:38729] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-11 15:34:10,339 ERROR [M:0;jenkins-hbase9:38729] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-11 15:34:10,339 INFO [M:0;jenkins-hbase9:38729] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-11 15:34:10,339 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-11 15:34:10,339 DEBUG [M:0;jenkins-hbase9:38729] zookeeper.ZKUtil(398): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-11 15:34:10,339 WARN [M:0;jenkins-hbase9:38729] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-11 15:34:10,339 INFO [M:0;jenkins-hbase9:38729] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-11 15:34:10,340 INFO [M:0;jenkins-hbase9:38729] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-11 15:34:10,340 DEBUG [M:0;jenkins-hbase9:38729] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 15:34:10,340 INFO [M:0;jenkins-hbase9:38729] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:10,340 DEBUG [M:0;jenkins-hbase9:38729] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:10,340 DEBUG [M:0;jenkins-hbase9:38729] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 15:34:10,340 DEBUG [M:0;jenkins-hbase9:38729] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:10,340 INFO [M:0;jenkins-hbase9:38729] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.99 KB heapSize=109.13 KB 2023-07-11 15:34:10,351 INFO [M:0;jenkins-hbase9:38729] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.99 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7bb010516686453c8145e1e6a028578d 2023-07-11 15:34:10,356 DEBUG [M:0;jenkins-hbase9:38729] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7bb010516686453c8145e1e6a028578d as hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7bb010516686453c8145e1e6a028578d 2023-07-11 15:34:10,360 INFO [M:0;jenkins-hbase9:38729] regionserver.HStore(1080): Added hdfs://localhost:46437/user/jenkins/test-data/510227f0-d589-714a-f304-1b2f507d1026/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7bb010516686453c8145e1e6a028578d, entries=24, sequenceid=194, filesize=12.4 K 2023-07-11 15:34:10,361 INFO [M:0;jenkins-hbase9:38729] regionserver.HRegion(2948): Finished flush of dataSize ~92.99 KB/95219, heapSize ~109.11 KB/111728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=194, compaction requested=false 2023-07-11 15:34:10,364 INFO [M:0;jenkins-hbase9:38729] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 15:34:10,364 DEBUG [M:0;jenkins-hbase9:38729] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:34:10,367 INFO [M:0;jenkins-hbase9:38729] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-11 15:34:10,367 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 15:34:10,367 INFO [M:0;jenkins-hbase9:38729] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:38729 2023-07-11 15:34:10,369 DEBUG [M:0;jenkins-hbase9:38729] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,38729,1689089645968 already deleted, retry=false 2023-07-11 15:34:10,474 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:10,474 INFO [M:0;jenkins-hbase9:38729] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,38729,1689089645968; zookeeper connection closed. 2023-07-11 15:34:10,474 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): master:38729-0x10154f761120000, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:10,574 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:10,574 INFO [RS:0;jenkins-hbase9:40917] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,40917,1689089646187; zookeeper connection closed. 2023-07-11 15:34:10,575 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:40917-0x10154f761120001, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:10,576 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5e7ac077] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5e7ac077 2023-07-11 15:34:10,675 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:10,675 INFO [RS:2;jenkins-hbase9:42857] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,42857,1689089646545; zookeeper connection closed. 2023-07-11 15:34:10,675 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:42857-0x10154f761120003, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:10,675 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4371a2e2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4371a2e2 2023-07-11 15:34:10,775 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:10,775 INFO [RS:1;jenkins-hbase9:43971] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,43971,1689089646380; zookeeper connection closed. 
2023-07-11 15:34:10,775 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): regionserver:43971-0x10154f761120002, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:10,776 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@137beac7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@137beac7 2023-07-11 15:34:10,776 INFO [Listener at localhost/36297] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-11 15:34:10,776 WARN [Listener at localhost/36297] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 15:34:10,780 INFO [Listener at localhost/36297] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 15:34:10,886 WARN [BP-837117883-172.31.2.10-1689089645037 heartbeating to localhost/127.0.0.1:46437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 15:34:10,886 WARN [BP-837117883-172.31.2.10-1689089645037 heartbeating to localhost/127.0.0.1:46437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-837117883-172.31.2.10-1689089645037 (Datanode Uuid 4a7b664a-1079-445e-b872-80b99f3b7f7f) service to localhost/127.0.0.1:46437 2023-07-11 15:34:10,887 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/dfs/data/data5/current/BP-837117883-172.31.2.10-1689089645037] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:10,887 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/dfs/data/data6/current/BP-837117883-172.31.2.10-1689089645037] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:10,889 WARN [Listener at localhost/36297] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 15:34:10,893 INFO [Listener at localhost/36297] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 15:34:10,995 WARN [BP-837117883-172.31.2.10-1689089645037 heartbeating to localhost/127.0.0.1:46437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 15:34:10,995 WARN [BP-837117883-172.31.2.10-1689089645037 heartbeating to localhost/127.0.0.1:46437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-837117883-172.31.2.10-1689089645037 (Datanode Uuid f8077ecc-8adb-407e-8e12-d868a9abb3a8) service to localhost/127.0.0.1:46437 2023-07-11 15:34:10,996 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/dfs/data/data3/current/BP-837117883-172.31.2.10-1689089645037] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:10,996 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/dfs/data/data4/current/BP-837117883-172.31.2.10-1689089645037] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:10,997 WARN [Listener at localhost/36297] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 15:34:11,000 INFO [Listener at localhost/36297] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 15:34:11,103 WARN [BP-837117883-172.31.2.10-1689089645037 heartbeating to localhost/127.0.0.1:46437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 15:34:11,103 WARN [BP-837117883-172.31.2.10-1689089645037 heartbeating to localhost/127.0.0.1:46437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-837117883-172.31.2.10-1689089645037 (Datanode Uuid f73478c7-6fb7-456d-99d3-0399cb198ee4) service to localhost/127.0.0.1:46437 2023-07-11 15:34:11,104 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/dfs/data/data1/current/BP-837117883-172.31.2.10-1689089645037] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:11,104 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/cluster_46f6c3f0-41d6-881c-95af-4c2fb644006e/dfs/data/data2/current/BP-837117883-172.31.2.10-1689089645037] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 15:34:11,113 INFO [Listener at localhost/36297] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 15:34:11,232 INFO [Listener at localhost/36297] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-11 15:34:11,266 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-11 15:34:11,266 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-11 15:34:11,266 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.log.dir so I do NOT create it in target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed 2023-07-11 15:34:11,266 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1583e3a3-faf2-4033-8cde-59a4a799fc81/hadoop.tmp.dir so I do NOT create it in target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed 2023-07-11 15:34:11,266 INFO [Listener at localhost/36297] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data 
directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba, deleteOnExit=true 2023-07-11 15:34:11,266 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/test.cache.data in system properties and HBase conf 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.tmp.dir in system properties and HBase conf 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir in system properties and HBase conf 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-11 15:34:11,267 DEBUG [Listener at localhost/36297] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-11 15:34:11,267 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/nfs.dump.dir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-11 15:34:11,268 INFO [Listener at localhost/36297] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-11 15:34:11,273 WARN [Listener at localhost/36297] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 15:34:11,273 WARN [Listener at localhost/36297] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 15:34:11,317 WARN [Listener at localhost/36297] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:34:11,320 INFO [Listener at localhost/36297] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:34:11,324 INFO [Listener at localhost/36297] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/Jetty_localhost_35045_hdfs____.vcjdqf/webapp 2023-07-11 15:34:11,329 DEBUG [Listener at localhost/36297-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10154f76112000a, quorum=127.0.0.1:51551, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-11 15:34:11,330 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10154f76112000a, quorum=127.0.0.1:51551, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-11 15:34:11,424 INFO [Listener at localhost/36297] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35045 2023-07-11 15:34:11,428 WARN [Listener at localhost/36297] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 15:34:11,428 WARN [Listener at localhost/36297] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 15:34:11,476 WARN [Listener at localhost/44357] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:34:11,490 WARN [Listener at localhost/44357] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:34:11,492 WARN [Listener 
at localhost/44357] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:34:11,493 INFO [Listener at localhost/44357] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:34:11,498 INFO [Listener at localhost/44357] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/Jetty_localhost_38037_datanode____682nau/webapp 2023-07-11 15:34:11,593 INFO [Listener at localhost/44357] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38037 2023-07-11 15:34:11,600 WARN [Listener at localhost/44153] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:34:11,623 WARN [Listener at localhost/44153] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:34:11,625 WARN [Listener at localhost/44153] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:34:11,626 INFO [Listener at localhost/44153] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:34:11,631 INFO [Listener at localhost/44153] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/Jetty_localhost_44659_datanode____.rjpydb/webapp 2023-07-11 15:34:11,711 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6bb07275afbea26b: Processing first storage report for DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0 from datanode 5af8bd8e-0d89-46b4-bd9e-33fcd8819c87 2023-07-11 15:34:11,711 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6bb07275afbea26b: from storage DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0 node DatanodeRegistration(127.0.0.1:46759, datanodeUuid=5af8bd8e-0d89-46b4-bd9e-33fcd8819c87, infoPort=41683, infoSecurePort=0, ipcPort=44153, storageInfo=lv=-57;cid=testClusterID;nsid=1900195723;c=1689089651275), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:11,711 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6bb07275afbea26b: Processing first storage report for DS-4dcb178a-a7ca-40ba-9797-a325119d5126 from datanode 5af8bd8e-0d89-46b4-bd9e-33fcd8819c87 2023-07-11 15:34:11,711 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6bb07275afbea26b: from storage DS-4dcb178a-a7ca-40ba-9797-a325119d5126 node DatanodeRegistration(127.0.0.1:46759, datanodeUuid=5af8bd8e-0d89-46b4-bd9e-33fcd8819c87, infoPort=41683, infoSecurePort=0, ipcPort=44153, storageInfo=lv=-57;cid=testClusterID;nsid=1900195723;c=1689089651275), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:11,738 INFO [Listener at localhost/44153] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44659 2023-07-11 15:34:11,745 WARN [Listener at localhost/43565] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-11 15:34:11,759 WARN [Listener at localhost/43565] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 15:34:11,761 WARN [Listener at localhost/43565] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 15:34:11,763 INFO [Listener at localhost/43565] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 15:34:11,767 INFO [Listener at localhost/43565] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/Jetty_localhost_35685_datanode____6zgepk/webapp 2023-07-11 15:34:11,845 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaf648d7adce5a0b4: Processing first storage report for DS-bae447f6-c340-465b-8f12-4e0db91ea62e from datanode c056f7ec-ec77-4f1a-b321-8846c4309192 2023-07-11 15:34:11,845 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaf648d7adce5a0b4: from storage DS-bae447f6-c340-465b-8f12-4e0db91ea62e node DatanodeRegistration(127.0.0.1:44981, datanodeUuid=c056f7ec-ec77-4f1a-b321-8846c4309192, infoPort=46337, infoSecurePort=0, ipcPort=43565, storageInfo=lv=-57;cid=testClusterID;nsid=1900195723;c=1689089651275), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:11,845 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaf648d7adce5a0b4: Processing first storage report for DS-c2fad07d-3546-416a-ae6a-61971da3952a from datanode c056f7ec-ec77-4f1a-b321-8846c4309192 2023-07-11 15:34:11,845 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaf648d7adce5a0b4: from storage DS-c2fad07d-3546-416a-ae6a-61971da3952a node DatanodeRegistration(127.0.0.1:44981, datanodeUuid=c056f7ec-ec77-4f1a-b321-8846c4309192, infoPort=46337, infoSecurePort=0, ipcPort=43565, storageInfo=lv=-57;cid=testClusterID;nsid=1900195723;c=1689089651275), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:11,876 INFO [Listener at localhost/43565] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35685 2023-07-11 15:34:11,884 WARN [Listener at localhost/36775] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 15:34:11,993 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfaca6f913f202853: Processing first storage report for DS-f2c8cf9b-30e5-48c7-9df6-495f25435771 from datanode 076ab4ec-f86c-4c6c-b01c-45a21cb89d72 2023-07-11 15:34:11,994 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfaca6f913f202853: from storage DS-f2c8cf9b-30e5-48c7-9df6-495f25435771 node DatanodeRegistration(127.0.0.1:41069, datanodeUuid=076ab4ec-f86c-4c6c-b01c-45a21cb89d72, infoPort=42677, infoSecurePort=0, ipcPort=36775, storageInfo=lv=-57;cid=testClusterID;nsid=1900195723;c=1689089651275), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:11,994 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfaca6f913f202853: Processing first storage 
report for DS-1578ba06-40b2-42aa-ad2a-c68c46778923 from datanode 076ab4ec-f86c-4c6c-b01c-45a21cb89d72 2023-07-11 15:34:11,994 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfaca6f913f202853: from storage DS-1578ba06-40b2-42aa-ad2a-c68c46778923 node DatanodeRegistration(127.0.0.1:41069, datanodeUuid=076ab4ec-f86c-4c6c-b01c-45a21cb89d72, infoPort=42677, infoSecurePort=0, ipcPort=36775, storageInfo=lv=-57;cid=testClusterID;nsid=1900195723;c=1689089651275), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 15:34:11,996 DEBUG [Listener at localhost/36775] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed 2023-07-11 15:34:12,002 INFO [Listener at localhost/36775] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/zookeeper_0, clientPort=64295, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-11 15:34:12,003 INFO [Listener at localhost/36775] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64295 2023-07-11 15:34:12,003 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,004 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,019 INFO [Listener at localhost/36775] util.FSUtils(471): Created version file at hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74 with version=8 2023-07-11 15:34:12,020 INFO [Listener at localhost/36775] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43853/user/jenkins/test-data/05b9588d-af80-989f-a418-fab87f836b8c/hbase-staging 2023-07-11 15:34:12,020 DEBUG [Listener at localhost/36775] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-11 15:34:12,021 DEBUG [Listener at localhost/36775] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-11 15:34:12,021 DEBUG [Listener at localhost/36775] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-11 15:34:12,021 DEBUG [Listener at localhost/36775] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-11 15:34:12,021 INFO [Listener at localhost/36775] client.ConnectionUtils(127): master/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:12,022 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,022 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,022 INFO [Listener at localhost/36775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:12,022 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,022 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:12,022 INFO [Listener at localhost/36775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:12,023 INFO [Listener at localhost/36775] ipc.NettyRpcServer(120): Bind to /172.31.2.10:37033 2023-07-11 15:34:12,023 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,024 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,025 INFO [Listener at localhost/36775] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37033 connecting to ZooKeeper ensemble=127.0.0.1:64295 2023-07-11 15:34:12,033 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:370330x0, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:12,034 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37033-0x10154f778bf0000 connected 2023-07-11 15:34:12,050 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:12,051 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:12,051 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:34:12,051 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37033 2023-07-11 15:34:12,052 DEBUG [Listener at localhost/36775] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37033 2023-07-11 15:34:12,052 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37033 2023-07-11 15:34:12,052 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37033 2023-07-11 15:34:12,052 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37033 2023-07-11 15:34:12,054 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:12,055 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:12,055 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:12,055 INFO [Listener at localhost/36775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-11 15:34:12,055 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:12,056 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:12,056 INFO [Listener at localhost/36775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 15:34:12,057 INFO [Listener at localhost/36775] http.HttpServer(1146): Jetty bound to port 33289 2023-07-11 15:34:12,057 INFO [Listener at localhost/36775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:12,068 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,069 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@24dba74e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:12,069 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,069 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ec943da{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:12,195 INFO [Listener at localhost/36775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:12,196 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:12,197 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:12,197 INFO [Listener at localhost/36775] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 15:34:12,198 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,199 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@419003a3{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/jetty-0_0_0_0-33289-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6987355089206301904/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 15:34:12,200 INFO [Listener at localhost/36775] server.AbstractConnector(333): Started ServerConnector@26366166{HTTP/1.1, (http/1.1)}{0.0.0.0:33289} 2023-07-11 15:34:12,200 INFO [Listener at localhost/36775] server.Server(415): Started @43694ms 2023-07-11 15:34:12,200 INFO [Listener at localhost/36775] master.HMaster(444): hbase.rootdir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74, hbase.cluster.distributed=false 2023-07-11 15:34:12,214 INFO [Listener at localhost/36775] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:12,214 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,214 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,214 
INFO [Listener at localhost/36775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:12,214 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,214 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:12,215 INFO [Listener at localhost/36775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:12,216 INFO [Listener at localhost/36775] ipc.NettyRpcServer(120): Bind to /172.31.2.10:41645 2023-07-11 15:34:12,217 INFO [Listener at localhost/36775] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:34:12,218 DEBUG [Listener at localhost/36775] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:34:12,218 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,219 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,220 INFO [Listener at localhost/36775] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41645 connecting to ZooKeeper ensemble=127.0.0.1:64295 2023-07-11 15:34:12,223 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:416450x0, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:12,224 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41645-0x10154f778bf0001 connected 2023-07-11 15:34:12,224 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:12,224 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:12,225 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:34:12,225 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41645 2023-07-11 15:34:12,226 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41645 2023-07-11 15:34:12,226 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41645 2023-07-11 15:34:12,226 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41645 2023-07-11 15:34:12,226 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41645 2023-07-11 15:34:12,228 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:12,228 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:12,228 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:12,228 INFO [Listener at localhost/36775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:34:12,228 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:12,228 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:12,229 INFO [Listener at localhost/36775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 15:34:12,229 INFO [Listener at localhost/36775] http.HttpServer(1146): Jetty bound to port 35391 2023-07-11 15:34:12,229 INFO [Listener at localhost/36775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:12,230 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,230 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@226cb0e2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:12,231 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,231 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7db5db50{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:12,343 INFO [Listener at localhost/36775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:12,344 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:12,344 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:12,345 INFO [Listener at localhost/36775] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:34:12,345 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,346 INFO 
[Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@10fa6db4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/jetty-0_0_0_0-35391-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4188410659870877160/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:12,347 INFO [Listener at localhost/36775] server.AbstractConnector(333): Started ServerConnector@2a106a9f{HTTP/1.1, (http/1.1)}{0.0.0.0:35391} 2023-07-11 15:34:12,347 INFO [Listener at localhost/36775] server.Server(415): Started @43841ms 2023-07-11 15:34:12,359 INFO [Listener at localhost/36775] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:12,359 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,359 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,359 INFO [Listener at localhost/36775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:12,359 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,359 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:12,359 INFO [Listener at localhost/36775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:12,360 INFO [Listener at localhost/36775] ipc.NettyRpcServer(120): Bind to /172.31.2.10:35817 2023-07-11 15:34:12,360 INFO [Listener at localhost/36775] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:34:12,361 DEBUG [Listener at localhost/36775] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:34:12,362 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,362 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,363 INFO [Listener at localhost/36775] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35817 connecting to ZooKeeper ensemble=127.0.0.1:64295 2023-07-11 15:34:12,366 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:358170x0, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 
15:34:12,367 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:358170x0, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:12,368 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35817-0x10154f778bf0002 connected 2023-07-11 15:34:12,368 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:12,369 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:34:12,369 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35817 2023-07-11 15:34:12,369 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35817 2023-07-11 15:34:12,369 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35817 2023-07-11 15:34:12,370 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35817 2023-07-11 15:34:12,370 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35817 2023-07-11 15:34:12,372 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:12,372 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:12,372 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:12,372 INFO [Listener at localhost/36775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:34:12,372 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:12,372 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:12,372 INFO [Listener at localhost/36775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 15:34:12,373 INFO [Listener at localhost/36775] http.HttpServer(1146): Jetty bound to port 38863 2023-07-11 15:34:12,373 INFO [Listener at localhost/36775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:12,374 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,374 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7198b9b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:12,374 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,375 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f123dc1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:12,495 INFO [Listener at localhost/36775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:12,495 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:12,496 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:12,496 INFO [Listener at localhost/36775] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:34:12,497 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,499 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6d1c7201{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/jetty-0_0_0_0-38863-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6448382993941772684/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:12,501 INFO [Listener at localhost/36775] server.AbstractConnector(333): Started ServerConnector@3b512c06{HTTP/1.1, (http/1.1)}{0.0.0.0:38863} 2023-07-11 15:34:12,501 INFO [Listener at localhost/36775] server.Server(415): Started @43995ms 2023-07-11 15:34:12,519 INFO [Listener at localhost/36775] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:12,519 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,519 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,520 INFO [Listener at localhost/36775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:12,520 INFO 
[Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:12,520 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:12,520 INFO [Listener at localhost/36775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:12,526 INFO [Listener at localhost/36775] ipc.NettyRpcServer(120): Bind to /172.31.2.10:32969 2023-07-11 15:34:12,526 INFO [Listener at localhost/36775] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:34:12,532 DEBUG [Listener at localhost/36775] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:34:12,532 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,534 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,535 INFO [Listener at localhost/36775] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32969 connecting to ZooKeeper ensemble=127.0.0.1:64295 2023-07-11 15:34:12,542 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:329690x0, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:12,543 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:329690x0, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:12,544 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32969-0x10154f778bf0003 connected 2023-07-11 15:34:12,544 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:12,544 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 15:34:12,545 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32969 2023-07-11 15:34:12,549 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32969 2023-07-11 15:34:12,557 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32969 2023-07-11 15:34:12,559 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32969 2023-07-11 15:34:12,559 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=32969 2023-07-11 15:34:12,561 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:12,561 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:12,561 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:12,561 INFO [Listener at localhost/36775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:34:12,561 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:12,561 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:12,562 INFO [Listener at localhost/36775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 15:34:12,562 INFO [Listener at localhost/36775] http.HttpServer(1146): Jetty bound to port 33549 2023-07-11 15:34:12,562 INFO [Listener at localhost/36775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:12,565 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,565 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7745cde6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:12,565 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,566 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@742966bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:12,686 INFO [Listener at localhost/36775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:12,687 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:12,688 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:12,688 INFO [Listener at localhost/36775] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:34:12,689 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:12,690 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3b91e364{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/jetty-0_0_0_0-33549-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5485837776139561436/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:12,692 INFO [Listener at localhost/36775] server.AbstractConnector(333): Started ServerConnector@26dd9db0{HTTP/1.1, (http/1.1)}{0.0.0.0:33549} 2023-07-11 15:34:12,693 INFO [Listener at localhost/36775] server.Server(415): Started @44186ms 2023-07-11 15:34:12,700 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:12,724 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@68c6f068{HTTP/1.1, (http/1.1)}{0.0.0.0:43861} 2023-07-11 15:34:12,725 INFO [master/jenkins-hbase9:0:becomeActiveMaster] server.Server(415): Started @44218ms 2023-07-11 15:34:12,725 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:12,727 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 15:34:12,728 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:12,730 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:12,730 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:12,730 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:12,730 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:12,730 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:12,735 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 15:34:12,735 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 15:34:12,735 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase9.apache.org,37033,1689089652021 from backup master directory 2023-07-11 15:34:12,736 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:12,736 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 15:34:12,736 WARN [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 15:34:12,736 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:12,754 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/hbase.id with ID: d070db4e-44d0-4159-ade3-3168001bc910 2023-07-11 15:34:12,770 INFO [master/jenkins-hbase9:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:12,772 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:12,797 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x29df1fab to 127.0.0.1:64295 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:12,806 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e9665fc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:12,806 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:12,807 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-11 15:34:12,807 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:12,808 INFO [master/jenkins-hbase9:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store-tmp 2023-07-11 15:34:12,820 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:12,820 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 15:34:12,820 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:12,820 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:12,820 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 15:34:12,821 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:12,821 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 15:34:12,821 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:34:12,821 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/WALs/jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:12,824 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C37033%2C1689089652021, suffix=, logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/WALs/jenkins-hbase9.apache.org,37033,1689089652021, archiveDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/oldWALs, maxLogs=10 2023-07-11 15:34:12,838 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK] 2023-07-11 15:34:12,842 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK] 2023-07-11 15:34:12,842 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK] 2023-07-11 15:34:12,846 INFO [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/WALs/jenkins-hbase9.apache.org,37033,1689089652021/jenkins-hbase9.apache.org%2C37033%2C1689089652021.1689089652824 2023-07-11 15:34:12,846 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK], DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK], DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK]] 2023-07-11 15:34:12,846 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:12,846 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:12,846 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:12,846 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:12,849 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:12,850 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-11 15:34:12,851 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-11 15:34:12,851 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:12,852 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:12,852 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:12,855 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 15:34:12,857 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:12,857 INFO [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10774349600, jitterRate=0.00343950092792511}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:12,857 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:34:12,858 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-11 15:34:12,859 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-11 15:34:12,859 INFO 
[master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-11 15:34:12,859 INFO [master/jenkins-hbase9:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-11 15:34:12,860 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-11 15:34:12,860 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-11 15:34:12,860 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-11 15:34:12,862 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-11 15:34:12,863 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-11 15:34:12,864 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-11 15:34:12,864 INFO [master/jenkins-hbase9:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-11 15:34:12,864 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-11 15:34:12,867 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:12,868 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-11 15:34:12,868 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-11 15:34:12,869 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-11 15:34:12,870 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:12,870 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:12,870 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-11 15:34:12,870 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:12,871 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:12,871 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase9.apache.org,37033,1689089652021, sessionid=0x10154f778bf0000, setting cluster-up flag (Was=false) 2023-07-11 15:34:12,878 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:12,882 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-11 15:34:12,883 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:12,887 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:12,891 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-11 15:34:12,892 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:12,896 WARN [master/jenkins-hbase9:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.hbase-snapshot/.tmp 2023-07-11 15:34:12,908 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-11 15:34:12,908 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-11 15:34:12,910 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-11 15:34:12,910 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 15:34:12,910 INFO [master/jenkins-hbase9:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-11 15:34:12,911 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-11 15:34:12,922 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 15:34:12,922 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 15:34:12,922 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 15:34:12,922 INFO [master/jenkins-hbase9:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 15:34:12,922 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:34:12,922 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:34:12,922 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:34:12,922 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=5, maxPoolSize=5 2023-07-11 15:34:12,922 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase9:0, corePoolSize=10, maxPoolSize=10 2023-07-11 15:34:12,922 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:12,922 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:12,922 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:12,923 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689089682923 2023-07-11 15:34:12,924 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-11 15:34:12,924 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-11 15:34:12,924 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-11 15:34:12,924 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-11 15:34:12,924 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-11 15:34:12,924 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-11 15:34:12,924 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:12,924 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 15:34:12,924 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-11 15:34:12,925 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-11 15:34:12,925 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-11 15:34:12,925 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-11 15:34:12,925 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-11 15:34:12,925 INFO [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-11 15:34:12,926 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089652925,5,FailOnTimeoutGroup] 2023-07-11 15:34:12,926 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089652926,5,FailOnTimeoutGroup] 2023-07-11 15:34:12,926 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:12,926 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-11 15:34:12,926 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:12,926 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:12,927 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:12,940 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:12,941 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:12,941 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74 2023-07-11 15:34:12,953 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:12,954 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 15:34:12,955 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/info 2023-07-11 15:34:12,955 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 15:34:12,956 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:12,956 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 15:34:12,957 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:34:12,957 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 15:34:12,958 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:12,958 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 15:34:12,959 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/table 2023-07-11 
15:34:12,960 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 15:34:12,960 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:12,961 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740 2023-07-11 15:34:12,961 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740 2023-07-11 15:34:12,963 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-11 15:34:12,964 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 15:34:12,966 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:12,966 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11661266560, jitterRate=0.08604007959365845}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 15:34:12,966 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 15:34:12,966 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 15:34:12,966 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 15:34:12,966 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 15:34:12,966 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 15:34:12,966 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 15:34:12,967 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 15:34:12,967 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 15:34:12,967 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 15:34:12,967 INFO [PEWorker-1] 
procedure.InitMetaProcedure(103): Going to assign meta 2023-07-11 15:34:12,968 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-11 15:34:12,968 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-11 15:34:12,970 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-11 15:34:12,997 INFO [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(951): ClusterId : d070db4e-44d0-4159-ade3-3168001bc910 2023-07-11 15:34:12,997 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(951): ClusterId : d070db4e-44d0-4159-ade3-3168001bc910 2023-07-11 15:34:12,997 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(951): ClusterId : d070db4e-44d0-4159-ade3-3168001bc910 2023-07-11 15:34:12,999 DEBUG [RS:2;jenkins-hbase9:32969] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:34:13,000 DEBUG [RS:1;jenkins-hbase9:35817] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:34:12,998 DEBUG [RS:0;jenkins-hbase9:41645] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:34:13,003 DEBUG [RS:1;jenkins-hbase9:35817] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:34:13,004 DEBUG [RS:1;jenkins-hbase9:35817] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:34:13,003 DEBUG [RS:2;jenkins-hbase9:32969] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:34:13,004 DEBUG [RS:2;jenkins-hbase9:32969] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:34:13,004 DEBUG [RS:0;jenkins-hbase9:41645] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:34:13,004 DEBUG [RS:0;jenkins-hbase9:41645] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:34:13,006 DEBUG [RS:1;jenkins-hbase9:35817] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:34:13,007 DEBUG [RS:0;jenkins-hbase9:41645] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:34:13,008 DEBUG [RS:1;jenkins-hbase9:35817] zookeeper.ReadOnlyZKClient(139): Connect 0x2f4ffe65 to 127.0.0.1:64295 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:13,011 DEBUG [RS:2;jenkins-hbase9:32969] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:34:13,011 DEBUG [RS:0;jenkins-hbase9:41645] zookeeper.ReadOnlyZKClient(139): Connect 0x72ba8b0e to 127.0.0.1:64295 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:13,014 DEBUG [RS:2;jenkins-hbase9:32969] zookeeper.ReadOnlyZKClient(139): 
Connect 0x32533615 to 127.0.0.1:64295 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:13,027 DEBUG [RS:1;jenkins-hbase9:35817] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@218805fc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:13,027 DEBUG [RS:1;jenkins-hbase9:35817] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25b79515, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:13,028 DEBUG [RS:0;jenkins-hbase9:41645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c0302a4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:13,028 DEBUG [RS:2;jenkins-hbase9:32969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c94ce2f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:13,028 DEBUG [RS:0;jenkins-hbase9:41645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4302bc38, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:13,028 DEBUG [RS:2;jenkins-hbase9:32969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38ab5d65, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:13,036 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase9:35817 2023-07-11 15:34:13,036 INFO [RS:1;jenkins-hbase9:35817] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:34:13,036 INFO [RS:1;jenkins-hbase9:35817] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:34:13,036 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-11 15:34:13,037 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,37033,1689089652021 with isa=jenkins-hbase9.apache.org/172.31.2.10:35817, startcode=1689089652358 2023-07-11 15:34:13,037 DEBUG [RS:1;jenkins-hbase9:35817] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:34:13,038 DEBUG [RS:0;jenkins-hbase9:41645] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase9:41645 2023-07-11 15:34:13,038 INFO [RS:0;jenkins-hbase9:41645] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:34:13,038 INFO [RS:0;jenkins-hbase9:41645] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:34:13,038 DEBUG [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 15:34:13,038 INFO [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,37033,1689089652021 with isa=jenkins-hbase9.apache.org/172.31.2.10:41645, startcode=1689089652214 2023-07-11 15:34:13,038 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:47229, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:34:13,039 DEBUG [RS:0;jenkins-hbase9:41645] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:34:13,040 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase9:32969 2023-07-11 15:34:13,040 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37033] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,040 INFO [RS:2;jenkins-hbase9:32969] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:34:13,040 INFO [RS:2;jenkins-hbase9:32969] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:34:13,040 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 15:34:13,040 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-11 15:34:13,041 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-11 15:34:13,041 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74 2023-07-11 15:34:13,041 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44357 2023-07-11 15:34:13,041 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33289 2023-07-11 15:34:13,041 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,37033,1689089652021 with isa=jenkins-hbase9.apache.org/172.31.2.10:32969, startcode=1689089652518 2023-07-11 15:34:13,041 DEBUG [RS:2;jenkins-hbase9:32969] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:34:13,042 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:33431, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:34:13,042 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37033] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:13,042 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 15:34:13,042 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-11 15:34:13,042 DEBUG [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74 2023-07-11 15:34:13,042 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:13,042 DEBUG [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44357 2023-07-11 15:34:13,042 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:41203, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:34:13,042 DEBUG [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33289 2023-07-11 15:34:13,043 DEBUG [RS:1;jenkins-hbase9:35817] zookeeper.ZKUtil(162): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,043 WARN [RS:1;jenkins-hbase9:35817] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-11 15:34:13,044 INFO [RS:1;jenkins-hbase9:35817] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:13,044 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,35817,1689089652358] 2023-07-11 15:34:13,043 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37033] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,044 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,044 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 15:34:13,044 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-11 15:34:13,044 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74 2023-07-11 15:34:13,044 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44357 2023-07-11 15:34:13,044 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33289 2023-07-11 15:34:13,048 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:13,049 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,32969,1689089652518] 2023-07-11 15:34:13,049 DEBUG [RS:0;jenkins-hbase9:41645] zookeeper.ZKUtil(162): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:13,049 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,41645,1689089652214] 2023-07-11 15:34:13,049 WARN [RS:0;jenkins-hbase9:41645] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-11 15:34:13,049 INFO [RS:0;jenkins-hbase9:41645] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:13,049 DEBUG [RS:2;jenkins-hbase9:32969] zookeeper.ZKUtil(162): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,049 DEBUG [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:13,049 WARN [RS:2;jenkins-hbase9:32969] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 15:34:13,049 INFO [RS:2;jenkins-hbase9:32969] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:13,050 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,050 DEBUG [RS:1;jenkins-hbase9:35817] zookeeper.ZKUtil(162): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,051 DEBUG [RS:1;jenkins-hbase9:35817] zookeeper.ZKUtil(162): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,055 DEBUG [RS:1;jenkins-hbase9:35817] zookeeper.ZKUtil(162): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:13,058 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:34:13,058 INFO [RS:1;jenkins-hbase9:35817] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:34:13,058 DEBUG [RS:0;jenkins-hbase9:41645] zookeeper.ZKUtil(162): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,058 DEBUG [RS:2;jenkins-hbase9:32969] zookeeper.ZKUtil(162): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,058 DEBUG [RS:0;jenkins-hbase9:41645] zookeeper.ZKUtil(162): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,059 DEBUG [RS:2;jenkins-hbase9:32969] zookeeper.ZKUtil(162): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,059 DEBUG [RS:0;jenkins-hbase9:41645] zookeeper.ZKUtil(162): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:13,059 DEBUG 
[RS:2;jenkins-hbase9:32969] zookeeper.ZKUtil(162): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:13,059 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:34:13,059 DEBUG [RS:0;jenkins-hbase9:41645] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:34:13,060 INFO [RS:2;jenkins-hbase9:32969] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:34:13,060 INFO [RS:0;jenkins-hbase9:41645] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:34:13,061 INFO [RS:1;jenkins-hbase9:35817] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:34:13,062 INFO [RS:2;jenkins-hbase9:32969] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:34:13,062 INFO [RS:0;jenkins-hbase9:41645] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:34:13,064 INFO [RS:1;jenkins-hbase9:35817] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:34:13,064 INFO [RS:1;jenkins-hbase9:35817] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,066 INFO [RS:0;jenkins-hbase9:41645] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:34:13,066 INFO [RS:2;jenkins-hbase9:32969] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:34:13,066 INFO [RS:2;jenkins-hbase9:32969] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,066 INFO [RS:0;jenkins-hbase9:41645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,066 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:34:13,066 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:34:13,067 INFO [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:34:13,069 INFO [RS:2;jenkins-hbase9:32969] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,069 INFO [RS:0;jenkins-hbase9:41645] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:13,070 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,070 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,070 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,070 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,070 INFO [RS:1;jenkins-hbase9:35817] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,070 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,070 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:13,071 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 
2023-07-11 15:34:13,071 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:13,072 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,072 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,071 DEBUG [RS:0;jenkins-hbase9:41645] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,072 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:13,072 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,072 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,072 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,072 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,072 DEBUG [RS:2;jenkins-hbase9:32969] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,072 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,072 DEBUG [RS:1;jenkins-hbase9:35817] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:13,074 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-11 15:34:13,081 INFO [RS:1;jenkins-hbase9:35817] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,084 INFO [RS:1;jenkins-hbase9:35817] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,083 INFO [RS:0;jenkins-hbase9:41645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,084 INFO [RS:1;jenkins-hbase9:35817] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:13,084 INFO [RS:2;jenkins-hbase9:32969] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,085 INFO [RS:0;jenkins-hbase9:41645] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,085 INFO [RS:2;jenkins-hbase9:32969] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,085 INFO [RS:0;jenkins-hbase9:41645] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,086 INFO [RS:2;jenkins-hbase9:32969] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,114 INFO [RS:1;jenkins-hbase9:35817] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:34:13,114 INFO [RS:1;jenkins-hbase9:35817] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,35817,1689089652358-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,115 INFO [RS:0;jenkins-hbase9:41645] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:34:13,115 INFO [RS:0;jenkins-hbase9:41645] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,41645,1689089652214-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,116 INFO [RS:2;jenkins-hbase9:32969] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:34:13,116 INFO [RS:2;jenkins-hbase9:32969] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,32969,1689089652518-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:13,120 DEBUG [jenkins-hbase9:37033] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-11 15:34:13,120 DEBUG [jenkins-hbase9:37033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:13,120 DEBUG [jenkins-hbase9:37033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:13,120 DEBUG [jenkins-hbase9:37033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:13,120 DEBUG [jenkins-hbase9:37033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:13,120 DEBUG [jenkins-hbase9:37033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:13,121 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,35817,1689089652358, state=OPENING 2023-07-11 15:34:13,123 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-11 15:34:13,124 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:13,124 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:34:13,124 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,35817,1689089652358}] 2023-07-11 15:34:13,132 INFO [RS:0;jenkins-hbase9:41645] regionserver.Replication(203): jenkins-hbase9.apache.org,41645,1689089652214 started 2023-07-11 15:34:13,134 INFO [RS:1;jenkins-hbase9:35817] regionserver.Replication(203): jenkins-hbase9.apache.org,35817,1689089652358 started 2023-07-11 15:34:13,134 INFO [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,41645,1689089652214, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:41645, sessionid=0x10154f778bf0001 2023-07-11 15:34:13,134 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,35817,1689089652358, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:35817, sessionid=0x10154f778bf0002 2023-07-11 15:34:13,138 DEBUG [RS:1;jenkins-hbase9:35817] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:34:13,138 INFO [RS:2;jenkins-hbase9:32969] regionserver.Replication(203): jenkins-hbase9.apache.org,32969,1689089652518 started 2023-07-11 15:34:13,138 DEBUG [RS:0;jenkins-hbase9:41645] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:34:13,138 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,32969,1689089652518, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:32969, sessionid=0x10154f778bf0003 2023-07-11 15:34:13,138 DEBUG [RS:1;jenkins-hbase9:35817] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,138 DEBUG [RS:2;jenkins-hbase9:32969] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:34:13,138 DEBUG 
[RS:2;jenkins-hbase9:32969] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,138 DEBUG [RS:2;jenkins-hbase9:32969] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,32969,1689089652518' 2023-07-11 15:34:13,138 DEBUG [RS:2;jenkins-hbase9:32969] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:34:13,138 DEBUG [RS:0;jenkins-hbase9:41645] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:13,139 DEBUG [RS:0;jenkins-hbase9:41645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,41645,1689089652214' 2023-07-11 15:34:13,139 DEBUG [RS:0;jenkins-hbase9:41645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:34:13,138 DEBUG [RS:1;jenkins-hbase9:35817] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,35817,1689089652358' 2023-07-11 15:34:13,139 DEBUG [RS:1;jenkins-hbase9:35817] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:34:13,139 DEBUG [RS:2;jenkins-hbase9:32969] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:34:13,141 DEBUG [RS:2;jenkins-hbase9:32969] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:34:13,141 DEBUG [RS:1;jenkins-hbase9:35817] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:34:13,141 DEBUG [RS:2;jenkins-hbase9:32969] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:34:13,141 DEBUG [RS:0;jenkins-hbase9:41645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:34:13,141 DEBUG [RS:2;jenkins-hbase9:32969] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,141 DEBUG [RS:2;jenkins-hbase9:32969] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,32969,1689089652518' 2023-07-11 15:34:13,142 DEBUG [RS:2;jenkins-hbase9:32969] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:34:13,142 DEBUG [RS:2;jenkins-hbase9:32969] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:34:13,142 DEBUG [RS:1;jenkins-hbase9:35817] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:34:13,142 DEBUG [RS:1;jenkins-hbase9:35817] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:34:13,142 DEBUG [RS:1;jenkins-hbase9:35817] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,143 DEBUG [RS:1;jenkins-hbase9:35817] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,35817,1689089652358' 2023-07-11 15:34:13,143 DEBUG [RS:1;jenkins-hbase9:35817] 
procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:34:13,143 DEBUG [RS:0;jenkins-hbase9:41645] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:34:13,143 DEBUG [RS:0;jenkins-hbase9:41645] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:34:13,143 DEBUG [RS:0;jenkins-hbase9:41645] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:13,143 DEBUG [RS:2;jenkins-hbase9:32969] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:34:13,143 DEBUG [RS:0;jenkins-hbase9:41645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,41645,1689089652214' 2023-07-11 15:34:13,143 DEBUG [RS:0;jenkins-hbase9:41645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:34:13,143 INFO [RS:2;jenkins-hbase9:32969] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 15:34:13,144 INFO [RS:2;jenkins-hbase9:32969] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-11 15:34:13,143 DEBUG [RS:1;jenkins-hbase9:35817] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:34:13,144 DEBUG [RS:0;jenkins-hbase9:41645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:34:13,145 DEBUG [RS:0;jenkins-hbase9:41645] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:34:13,145 INFO [RS:0;jenkins-hbase9:41645] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 15:34:13,145 INFO [RS:0;jenkins-hbase9:41645] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-11 15:34:13,147 DEBUG [RS:1;jenkins-hbase9:35817] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:34:13,147 INFO [RS:1;jenkins-hbase9:35817] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 15:34:13,147 INFO [RS:1;jenkins-hbase9:35817] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-11 15:34:13,221 WARN [ReadOnlyZKClient-127.0.0.1:64295@0x29df1fab] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-11 15:34:13,221 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:34:13,222 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46250, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:34:13,223 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35817] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.2.10:46250 deadline: 1689089713223, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,246 INFO [RS:2;jenkins-hbase9:32969] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C32969%2C1689089652518, suffix=, logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,32969,1689089652518, archiveDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs, maxLogs=32 2023-07-11 15:34:13,247 INFO [RS:0;jenkins-hbase9:41645] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C41645%2C1689089652214, suffix=, logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,41645,1689089652214, archiveDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs, maxLogs=32 2023-07-11 15:34:13,248 INFO [RS:1;jenkins-hbase9:35817] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C35817%2C1689089652358, suffix=, logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,35817,1689089652358, archiveDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs, maxLogs=32 2023-07-11 15:34:13,265 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK] 2023-07-11 15:34:13,269 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK] 2023-07-11 15:34:13,269 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK] 2023-07-11 15:34:13,270 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK] 2023-07-11 15:34:13,279 DEBUG 
[RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK] 2023-07-11 15:34:13,279 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK] 2023-07-11 15:34:13,279 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK] 2023-07-11 15:34:13,279 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK] 2023-07-11 15:34:13,280 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK] 2023-07-11 15:34:13,280 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,284 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:34:13,284 INFO [RS:0;jenkins-hbase9:41645] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,41645,1689089652214/jenkins-hbase9.apache.org%2C41645%2C1689089652214.1689089653247 2023-07-11 15:34:13,285 INFO [RS:1;jenkins-hbase9:35817] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,35817,1689089652358/jenkins-hbase9.apache.org%2C35817%2C1689089652358.1689089653248 2023-07-11 15:34:13,288 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46256, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:34:13,288 DEBUG [RS:1;jenkins-hbase9:35817] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK], DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK], DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK]] 2023-07-11 15:34:13,289 INFO [RS:2;jenkins-hbase9:32969] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,32969,1689089652518/jenkins-hbase9.apache.org%2C32969%2C1689089652518.1689089653246 2023-07-11 15:34:13,289 DEBUG [RS:0;jenkins-hbase9:41645] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK], DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK], DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK]] 2023-07-11 
15:34:13,290 DEBUG [RS:2;jenkins-hbase9:32969] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK], DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK], DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK]] 2023-07-11 15:34:13,293 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 15:34:13,293 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:13,294 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C35817%2C1689089652358.meta, suffix=.meta, logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,35817,1689089652358, archiveDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs, maxLogs=32 2023-07-11 15:34:13,309 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK] 2023-07-11 15:34:13,309 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK] 2023-07-11 15:34:13,309 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK] 2023-07-11 15:34:13,311 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,35817,1689089652358/jenkins-hbase9.apache.org%2C35817%2C1689089652358.meta.1689089653295.meta 2023-07-11 15:34:13,311 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK], DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK], DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK]] 2023-07-11 15:34:13,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:13,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 15:34:13,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 15:34:13,312 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-11 15:34:13,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 15:34:13,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:13,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 15:34:13,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 15:34:13,313 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 15:34:13,314 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/info 2023-07-11 15:34:13,314 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/info 2023-07-11 15:34:13,315 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 15:34:13,315 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:13,315 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 15:34:13,316 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:34:13,316 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/rep_barrier 2023-07-11 15:34:13,316 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 15:34:13,317 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:13,317 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 15:34:13,317 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/table 2023-07-11 15:34:13,318 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/table 2023-07-11 15:34:13,318 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 15:34:13,318 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:13,319 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740 2023-07-11 15:34:13,320 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740 2023-07-11 15:34:13,322 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-11 15:34:13,323 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 15:34:13,325 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10443410080, jitterRate=-0.027381643652915955}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 15:34:13,325 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 15:34:13,325 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689089653279 2023-07-11 15:34:13,329 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 15:34:13,330 INFO [RS_OPEN_META-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 15:34:13,330 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase9.apache.org,35817,1689089652358, state=OPEN 2023-07-11 15:34:13,331 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 15:34:13,331 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 15:34:13,333 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-11 15:34:13,333 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase9.apache.org,35817,1689089652358 in 207 msec 2023-07-11 15:34:13,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-11 15:34:13,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 366 msec 2023-07-11 15:34:13,339 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 424 msec 2023-07-11 15:34:13,339 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689089653339, completionTime=-1 2023-07-11 15:34:13,339 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-11 15:34:13,340 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-11 15:34:13,343 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-11 15:34:13,343 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689089713343 2023-07-11 15:34:13,344 INFO [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689089773344 2023-07-11 15:34:13,344 INFO [master/jenkins-hbase9:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-11 15:34:13,350 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,37033,1689089652021-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,350 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,37033,1689089652021-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,350 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,37033,1689089652021-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,350 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase9:37033, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,350 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:13,350 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-11 15:34:13,350 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:13,351 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-11 15:34:13,352 DEBUG [master/jenkins-hbase9:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-11 15:34:13,352 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:13,353 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:34:13,354 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,355 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e empty. 2023-07-11 15:34:13,355 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,355 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-11 15:34:13,366 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:13,367 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => bec0bc3d4a60a2beb91b5784ba8a455e, NAME => 'hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp 2023-07-11 15:34:13,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:13,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing bec0bc3d4a60a2beb91b5784ba8a455e, disabling compactions & flushes 2023-07-11 15:34:13,375 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 
2023-07-11 15:34:13,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 2023-07-11 15:34:13,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. after waiting 0 ms 2023-07-11 15:34:13,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 2023-07-11 15:34:13,375 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 2023-07-11 15:34:13,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for bec0bc3d4a60a2beb91b5784ba8a455e: 2023-07-11 15:34:13,377 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:34:13,378 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089653378"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089653378"}]},"ts":"1689089653378"} 2023-07-11 15:34:13,380 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:34:13,380 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:34:13,380 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089653380"}]},"ts":"1689089653380"} 2023-07-11 15:34:13,381 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-11 15:34:13,385 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:13,385 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:13,385 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:13,385 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:13,385 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:13,385 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=bec0bc3d4a60a2beb91b5784ba8a455e, ASSIGN}] 2023-07-11 15:34:13,386 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=bec0bc3d4a60a2beb91b5784ba8a455e, ASSIGN 2023-07-11 15:34:13,387 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=bec0bc3d4a60a2beb91b5784ba8a455e, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,35817,1689089652358; forceNewPlan=false, retain=false 2023-07-11 15:34:13,525 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:13,527 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-11 15:34:13,528 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:13,529 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:34:13,531 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,531 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0 empty. 2023-07-11 15:34:13,532 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,532 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-11 15:34:13,537 INFO [jenkins-hbase9:37033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 15:34:13,538 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=bec0bc3d4a60a2beb91b5784ba8a455e, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,538 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089653538"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089653538"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089653538"}]},"ts":"1689089653538"} 2023-07-11 15:34:13,542 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure bec0bc3d4a60a2beb91b5784ba8a455e, server=jenkins-hbase9.apache.org,35817,1689089652358}] 2023-07-11 15:34:13,562 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:13,563 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e8c87b4aafa4a2756a9b6f91d7103fb0, NAME => 'hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp 2023-07-11 15:34:13,573 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:13,573 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e8c87b4aafa4a2756a9b6f91d7103fb0, disabling compactions & flushes 2023-07-11 15:34:13,573 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 2023-07-11 15:34:13,573 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 2023-07-11 15:34:13,573 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. after waiting 0 ms 2023-07-11 15:34:13,573 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 2023-07-11 15:34:13,573 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 
2023-07-11 15:34:13,573 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e8c87b4aafa4a2756a9b6f91d7103fb0: 2023-07-11 15:34:13,576 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:34:13,577 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089653576"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089653576"}]},"ts":"1689089653576"} 2023-07-11 15:34:13,578 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 15:34:13,579 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:34:13,579 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089653579"}]},"ts":"1689089653579"} 2023-07-11 15:34:13,580 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-11 15:34:13,585 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:13,585 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:13,585 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:13,585 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:13,585 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:13,585 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e8c87b4aafa4a2756a9b6f91d7103fb0, ASSIGN}] 2023-07-11 15:34:13,586 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e8c87b4aafa4a2756a9b6f91d7103fb0, ASSIGN 2023-07-11 15:34:13,587 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e8c87b4aafa4a2756a9b6f91d7103fb0, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,32969,1689089652518; forceNewPlan=false, retain=false 2023-07-11 15:34:13,697 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 
2023-07-11 15:34:13,697 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bec0bc3d4a60a2beb91b5784ba8a455e, NAME => 'hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:13,697 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:13,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,699 INFO [StoreOpener-bec0bc3d4a60a2beb91b5784ba8a455e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,700 DEBUG [StoreOpener-bec0bc3d4a60a2beb91b5784ba8a455e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e/info 2023-07-11 15:34:13,700 DEBUG [StoreOpener-bec0bc3d4a60a2beb91b5784ba8a455e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e/info 2023-07-11 15:34:13,700 INFO [StoreOpener-bec0bc3d4a60a2beb91b5784ba8a455e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bec0bc3d4a60a2beb91b5784ba8a455e columnFamilyName info 2023-07-11 15:34:13,701 INFO [StoreOpener-bec0bc3d4a60a2beb91b5784ba8a455e-1] regionserver.HStore(310): Store=bec0bc3d4a60a2beb91b5784ba8a455e/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:13,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,704 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:13,706 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:13,707 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened bec0bc3d4a60a2beb91b5784ba8a455e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10946179360, jitterRate=0.019442394375801086}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:13,707 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for bec0bc3d4a60a2beb91b5784ba8a455e: 2023-07-11 15:34:13,707 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e., pid=7, masterSystemTime=1689089653694 2023-07-11 15:34:13,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 2023-07-11 15:34:13,709 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 
2023-07-11 15:34:13,710 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=bec0bc3d4a60a2beb91b5784ba8a455e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:13,710 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689089653710"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089653710"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089653710"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089653710"}]},"ts":"1689089653710"} 2023-07-11 15:34:13,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-11 15:34:13,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure bec0bc3d4a60a2beb91b5784ba8a455e, server=jenkins-hbase9.apache.org,35817,1689089652358 in 169 msec 2023-07-11 15:34:13,714 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-11 15:34:13,714 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=bec0bc3d4a60a2beb91b5784ba8a455e, ASSIGN in 327 msec 2023-07-11 15:34:13,714 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:34:13,714 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089653714"}]},"ts":"1689089653714"} 2023-07-11 15:34:13,716 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-11 15:34:13,718 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:34:13,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 368 msec 2023-07-11 15:34:13,737 INFO [jenkins-hbase9:37033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 15:34:13,738 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e8c87b4aafa4a2756a9b6f91d7103fb0, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,738 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089653738"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089653738"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089653738"}]},"ts":"1689089653738"} 2023-07-11 15:34:13,739 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure e8c87b4aafa4a2756a9b6f91d7103fb0, server=jenkins-hbase9.apache.org,32969,1689089652518}] 2023-07-11 15:34:13,752 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-11 15:34:13,753 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:13,753 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:13,757 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-11 15:34:13,763 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:13,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-11 15:34:13,779 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-11 15:34:13,780 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-11 15:34:13,780 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-11 15:34:13,897 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,897 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 15:34:13,898 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:57732, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 15:34:13,902 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 
2023-07-11 15:34:13,902 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e8c87b4aafa4a2756a9b6f91d7103fb0, NAME => 'hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:13,902 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 15:34:13,902 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. service=MultiRowMutationService 2023-07-11 15:34:13,903 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-11 15:34:13,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:13,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,903 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,904 INFO [StoreOpener-e8c87b4aafa4a2756a9b6f91d7103fb0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,906 DEBUG [StoreOpener-e8c87b4aafa4a2756a9b6f91d7103fb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0/m 2023-07-11 15:34:13,906 DEBUG [StoreOpener-e8c87b4aafa4a2756a9b6f91d7103fb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0/m 2023-07-11 15:34:13,906 INFO [StoreOpener-e8c87b4aafa4a2756a9b6f91d7103fb0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
e8c87b4aafa4a2756a9b6f91d7103fb0 columnFamilyName m 2023-07-11 15:34:13,907 INFO [StoreOpener-e8c87b4aafa4a2756a9b6f91d7103fb0-1] regionserver.HStore(310): Store=e8c87b4aafa4a2756a9b6f91d7103fb0/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:13,907 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,908 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,911 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:13,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:13,917 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened e8c87b4aafa4a2756a9b6f91d7103fb0; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@12e2965d, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:13,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for e8c87b4aafa4a2756a9b6f91d7103fb0: 2023-07-11 15:34:13,918 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0., pid=9, masterSystemTime=1689089653896 2023-07-11 15:34:13,923 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 2023-07-11 15:34:13,924 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 
2023-07-11 15:34:13,924 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e8c87b4aafa4a2756a9b6f91d7103fb0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:13,924 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689089653924"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089653924"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089653924"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089653924"}]},"ts":"1689089653924"} 2023-07-11 15:34:13,930 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-11 15:34:13,930 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure e8c87b4aafa4a2756a9b6f91d7103fb0, server=jenkins-hbase9.apache.org,32969,1689089652518 in 187 msec 2023-07-11 15:34:13,932 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-11 15:34:13,932 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e8c87b4aafa4a2756a9b6f91d7103fb0, ASSIGN in 345 msec 2023-07-11 15:34:13,939 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:13,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 163 msec 2023-07-11 15:34:13,945 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:34:13,945 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089653945"}]},"ts":"1689089653945"} 2023-07-11 15:34:13,946 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-11 15:34:13,949 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:34:13,954 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 424 msec 2023-07-11 15:34:13,958 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-11 15:34:13,960 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-11 15:34:13,960 INFO [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1083): Master has completed 
initialization 1.224sec 2023-07-11 15:34:13,960 INFO [master/jenkins-hbase9:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-11 15:34:13,960 INFO [master/jenkins-hbase9:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-11 15:34:13,960 INFO [master/jenkins-hbase9:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-11 15:34:13,961 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,37033,1689089652021-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-11 15:34:13,961 INFO [master/jenkins-hbase9:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,37033,1689089652021-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-11 15:34:13,961 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-11 15:34:13,998 DEBUG [Listener at localhost/36775] zookeeper.ReadOnlyZKClient(139): Connect 0x7ecf429c to 127.0.0.1:64295 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:14,003 DEBUG [Listener at localhost/36775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b16a418, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:14,005 DEBUG [hconnection-0x1f28e8dd-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:34:14,009 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46268, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:34:14,010 INFO [Listener at localhost/36775] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:14,011 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:14,030 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:34:14,031 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:57736, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:34:14,033 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-11 15:34:14,033 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-11 15:34:14,044 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:14,044 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:14,045 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 15:34:14,046 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-11 15:34:14,113 DEBUG [Listener at localhost/36775] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-11 15:34:14,115 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:40918, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-11 15:34:14,118 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-11 15:34:14,118 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:14,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(492): Client=jenkins//172.31.2.10 set balanceSwitch=false 2023-07-11 15:34:14,119 DEBUG [Listener at localhost/36775] zookeeper.ReadOnlyZKClient(139): Connect 0x79221fdc to 127.0.0.1:64295 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:14,124 DEBUG [Listener at localhost/36775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21947e25, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:14,124 INFO [Listener at localhost/36775] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:64295 2023-07-11 15:34:14,127 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:14,128 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10154f778bf000a connected 2023-07-11 15:34:14,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:14,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 
15:34:14,133 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-11 15:34:14,144 INFO [Listener at localhost/36775] client.ConnectionUtils(127): regionserver/jenkins-hbase9:0 server-side Connection retries=45 2023-07-11 15:34:14,145 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:14,145 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:14,145 INFO [Listener at localhost/36775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 15:34:14,145 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 15:34:14,145 INFO [Listener at localhost/36775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 15:34:14,145 INFO [Listener at localhost/36775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 15:34:14,146 INFO [Listener at localhost/36775] ipc.NettyRpcServer(120): Bind to /172.31.2.10:39531 2023-07-11 15:34:14,146 INFO [Listener at localhost/36775] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 15:34:14,147 DEBUG [Listener at localhost/36775] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 15:34:14,147 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:14,148 INFO [Listener at localhost/36775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 15:34:14,149 INFO [Listener at localhost/36775] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39531 connecting to ZooKeeper ensemble=127.0.0.1:64295 2023-07-11 15:34:14,152 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:395310x0, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 15:34:14,153 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(162): regionserver:395310x0, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 15:34:14,154 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39531-0x10154f778bf000b connected 2023-07-11 15:34:14,155 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(162): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-11 15:34:14,155 DEBUG [Listener at localhost/36775] zookeeper.ZKUtil(164): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on 
znode that does not yet exist, /hbase/acl 2023-07-11 15:34:14,155 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39531 2023-07-11 15:34:14,156 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39531 2023-07-11 15:34:14,156 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39531 2023-07-11 15:34:14,156 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39531 2023-07-11 15:34:14,156 DEBUG [Listener at localhost/36775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39531 2023-07-11 15:34:14,158 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 15:34:14,158 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 15:34:14,158 INFO [Listener at localhost/36775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 15:34:14,158 INFO [Listener at localhost/36775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 15:34:14,158 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 15:34:14,158 INFO [Listener at localhost/36775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 15:34:14,159 INFO [Listener at localhost/36775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 15:34:14,159 INFO [Listener at localhost/36775] http.HttpServer(1146): Jetty bound to port 44725 2023-07-11 15:34:14,159 INFO [Listener at localhost/36775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 15:34:14,160 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:14,161 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@781925b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,AVAILABLE} 2023-07-11 15:34:14,161 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:14,161 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10eaf42a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 15:34:14,291 INFO [Listener at localhost/36775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 15:34:14,291 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 15:34:14,291 INFO [Listener at localhost/36775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 15:34:14,292 INFO [Listener at localhost/36775] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 15:34:14,292 INFO [Listener at localhost/36775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 15:34:14,293 INFO [Listener at localhost/36775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7039aa30{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/java.io.tmpdir/jetty-0_0_0_0-44725-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2673598390006359428/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:14,295 INFO [Listener at localhost/36775] server.AbstractConnector(333): Started ServerConnector@3fb94637{HTTP/1.1, (http/1.1)}{0.0.0.0:44725} 2023-07-11 15:34:14,295 INFO [Listener at localhost/36775] server.Server(415): Started @45789ms 2023-07-11 15:34:14,298 INFO [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(951): ClusterId : d070db4e-44d0-4159-ade3-3168001bc910 2023-07-11 15:34:14,299 DEBUG [RS:3;jenkins-hbase9:39531] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 15:34:14,302 DEBUG [RS:3;jenkins-hbase9:39531] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 15:34:14,302 DEBUG [RS:3;jenkins-hbase9:39531] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 15:34:14,303 DEBUG [RS:3;jenkins-hbase9:39531] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 15:34:14,307 DEBUG [RS:3;jenkins-hbase9:39531] zookeeper.ReadOnlyZKClient(139): Connect 0x386cbc9e to 
127.0.0.1:64295 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 15:34:14,311 DEBUG [RS:3;jenkins-hbase9:39531] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36ab45e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 15:34:14,311 DEBUG [RS:3;jenkins-hbase9:39531] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3294d7ee, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:14,320 DEBUG [RS:3;jenkins-hbase9:39531] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase9:39531 2023-07-11 15:34:14,320 INFO [RS:3;jenkins-hbase9:39531] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 15:34:14,320 INFO [RS:3;jenkins-hbase9:39531] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 15:34:14,320 DEBUG [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 15:34:14,320 INFO [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase9.apache.org,37033,1689089652021 with isa=jenkins-hbase9.apache.org/172.31.2.10:39531, startcode=1689089654144 2023-07-11 15:34:14,320 DEBUG [RS:3;jenkins-hbase9:39531] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 15:34:14,322 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:33673, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 15:34:14,323 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37033] master.ServerManager(394): Registering regionserver=jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,323 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 15:34:14,323 DEBUG [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74 2023-07-11 15:34:14,323 DEBUG [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44357 2023-07-11 15:34:14,323 DEBUG [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33289 2023-07-11 15:34:14,329 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:14,329 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:14,329 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:14,329 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:14,329 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:14,330 DEBUG [RS:3;jenkins-hbase9:39531] zookeeper.ZKUtil(162): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,330 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase9.apache.org,39531,1689089654144] 2023-07-11 15:34:14,330 WARN [RS:3;jenkins-hbase9:39531] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-11 15:34:14,330 INFO [RS:3;jenkins-hbase9:39531] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 15:34:14,330 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 15:34:14,330 DEBUG [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,330 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:14,330 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:14,331 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:14,331 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:14,331 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:14,334 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,334 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:14,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,334 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-11 15:34:14,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:14,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:14,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:14,338 DEBUG [RS:3;jenkins-hbase9:39531] zookeeper.ZKUtil(162): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:14,338 DEBUG [RS:3;jenkins-hbase9:39531] zookeeper.ZKUtil(162): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:14,338 DEBUG [RS:3;jenkins-hbase9:39531] zookeeper.ZKUtil(162): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,338 DEBUG [RS:3;jenkins-hbase9:39531] zookeeper.ZKUtil(162): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:14,339 DEBUG [RS:3;jenkins-hbase9:39531] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 15:34:14,339 INFO [RS:3;jenkins-hbase9:39531] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 15:34:14,340 INFO [RS:3;jenkins-hbase9:39531] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 15:34:14,341 INFO [RS:3;jenkins-hbase9:39531] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 15:34:14,341 INFO [RS:3;jenkins-hbase9:39531] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:14,341 INFO [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 15:34:14,343 INFO [RS:3;jenkins-hbase9:39531] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:14,343 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,343 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,343 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,344 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,344 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,344 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase9:0, corePoolSize=2, maxPoolSize=2 2023-07-11 15:34:14,344 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,344 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,344 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,344 DEBUG [RS:3;jenkins-hbase9:39531] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase9:0, corePoolSize=1, maxPoolSize=1 2023-07-11 15:34:14,345 INFO [RS:3;jenkins-hbase9:39531] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:14,345 INFO [RS:3;jenkins-hbase9:39531] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:14,345 INFO [RS:3;jenkins-hbase9:39531] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 15:34:14,356 INFO [RS:3;jenkins-hbase9:39531] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 15:34:14,356 INFO [RS:3;jenkins-hbase9:39531] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase9.apache.org,39531,1689089654144-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 15:34:14,369 INFO [RS:3;jenkins-hbase9:39531] regionserver.Replication(203): jenkins-hbase9.apache.org,39531,1689089654144 started 2023-07-11 15:34:14,369 INFO [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1637): Serving as jenkins-hbase9.apache.org,39531,1689089654144, RpcServer on jenkins-hbase9.apache.org/172.31.2.10:39531, sessionid=0x10154f778bf000b 2023-07-11 15:34:14,369 DEBUG [RS:3;jenkins-hbase9:39531] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 15:34:14,369 DEBUG [RS:3;jenkins-hbase9:39531] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,369 DEBUG [RS:3;jenkins-hbase9:39531] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,39531,1689089654144' 2023-07-11 15:34:14,369 DEBUG [RS:3;jenkins-hbase9:39531] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 15:34:14,369 DEBUG [RS:3;jenkins-hbase9:39531] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 15:34:14,370 DEBUG [RS:3;jenkins-hbase9:39531] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 15:34:14,370 DEBUG [RS:3;jenkins-hbase9:39531] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 15:34:14,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:14,370 DEBUG [RS:3;jenkins-hbase9:39531] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:14,370 DEBUG [RS:3;jenkins-hbase9:39531] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase9.apache.org,39531,1689089654144' 2023-07-11 15:34:14,370 DEBUG [RS:3;jenkins-hbase9:39531] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 15:34:14,370 DEBUG [RS:3;jenkins-hbase9:39531] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 15:34:14,371 DEBUG [RS:3;jenkins-hbase9:39531] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 15:34:14,371 INFO [RS:3;jenkins-hbase9:39531] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 15:34:14,371 INFO [RS:3;jenkins-hbase9:39531] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-11 15:34:14,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:14,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:14,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:14,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:14,376 DEBUG [hconnection-0x6bab138e-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:34:14,377 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:46272, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:34:14,381 DEBUG [hconnection-0x6bab138e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 15:34:14,382 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.2.10:57744, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 15:34:14,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:14,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:14,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:14,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:14,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:40918 deadline: 1689090854386, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:14,387 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:34:14,388 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:14,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:14,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:14,389 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:14,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:14,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:14,450 INFO [Listener at localhost/36775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=552 (was 512) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data2/current/BP-1044115924-172.31.2.10-1689089651275 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1412254669_17 at /127.0.0.1:60404 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 943599658@qtp-781731978-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x183af44f-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_360324249_17 at /127.0.0.1:40678 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1348090937-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6266862b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase9:41645-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1c0e734a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36297-SendThread(127.0.0.1:51551) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase9:39531Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6bab138e-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1877189974_17 at /127.0.0.1:40628 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp454770656-2243 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x183af44f-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_360324249_17 at /127.0.0.1:60436 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x79221fdc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase9:35817-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_360324249_17 at /127.0.0.1:60410 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-58e40637-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74-prefix:jenkins-hbase9.apache.org,32969,1689089652518 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: qtp154596617-2208 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 43565 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@392ff27d java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1015159813) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: hconnection-0x183af44f-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: hconnection-0x183af44f-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1348090937-2304 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1378310117-2273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1711346392-2313 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:44357 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x2f4ffe65-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: M:0;jenkins-hbase9:37033 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) 
org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1804903225@qtp-2093948479-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35045 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1877189974_17 at /127.0.0.1:44010 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:64295): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x2f4ffe65 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp154596617-2212 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:46437 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1711346392-2312 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1378310117-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1213740349-2578 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-SendThread(127.0.0.1:64295) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:44357 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_605997357_17 at /127.0.0.1:40670 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39531 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x386cbc9e-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1711346392-2314-acceptor-0@5210c7a2-ServerConnector@68c6f068{HTTP/1.1, (http/1.1)}{0.0.0.0:43861} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 44357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp454770656-2240-acceptor-0@681e5597-ServerConnector@2a106a9f{HTTP/1.1, (http/1.1)}{0.0.0.0:35391} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:35817Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_360324249_17 at /127.0.0.1:40662 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase9:39531-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x29df1fab-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x72ba8b0e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1711346392-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:44357 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x386cbc9e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1725412317@qtp-1393711106-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData-prefix:jenkins-hbase9.apache.org,37033,1689089652021 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1213740349-2577-acceptor-0@1ea4a8d8-ServerConnector@3fb94637{HTTP/1.1, (http/1.1)}{0.0.0.0:44725} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1877189974_17 at /127.0.0.1:60376 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1348090937-2303 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp154596617-2214 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:44357 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 36775 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 2 on default port 44153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) 
Potentially hanging thread: Session-HouseKeeper-1d66a142-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1213740349-2581 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_605997357_17 at /127.0.0.1:60334 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 44357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_360324249_17 at /127.0.0.1:44056 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp154596617-2209-acceptor-0@12973fef-ServerConnector@26366166{HTTP/1.1, (http/1.1)}{0.0.0.0:33289} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:44357 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:32969Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data3/current/BP-1044115924-172.31.2.10-1689089651275 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase9:39531 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 36775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 43565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_360324249_17 at /127.0.0.1:44068 
[Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:46437 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/36775.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1213740349-2579 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x72ba8b0e-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:46437 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp454770656-2242 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1348090937-2300-acceptor-0@1306cf76-ServerConnector@26dd9db0{HTTP/1.1, (http/1.1)}{0.0.0.0:33549} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1348090937-2301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2947125d sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x29df1fab-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1601020609@qtp-2093948479-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: hconnection-0x6bab138e-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase9:41645Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 44357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp454770656-2245 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089652925 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data1/current/BP-1044115924-172.31.2.10-1689089651275 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_605997357_17 at /127.0.0.1:60422 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1213740349-2580 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1378310117-2275 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@10d8ee46[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x7ecf429c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x72ba8b0e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 44357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1711346392-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x79221fdc-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:1;jenkins-hbase9:35817 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51551@0x35fa0ee6-SendThread(127.0.0.1:51551) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: Session-HouseKeeper-3da779a5-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1378310117-2271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2097656594@qtp-781731978-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44659 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: hconnection-0x183af44f-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@4b3103ab sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:44357 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/36775-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x1f28e8dd-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase9:32969 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_360324249_17 at /127.0.0.1:43982 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x32533615 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:44357 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:44357 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1378310117-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase9:41645 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74-prefix:jenkins-hbase9.apache.org,35817,1689089652358.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1022215812@qtp-1383519925-1 
- Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38037 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36297-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 264715690@qtp-1393711106-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35685 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp454770656-2244 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@2b8dff0b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC 
Server handler 4 on default port 43565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1378310117-2270-acceptor-0@2ece3ec5-ServerConnector@3b512c06{HTTP/1.1, (http/1.1)}{0.0.0.0:38863} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x7ecf429c-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 44357 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2aa6aee7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:46437 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1213740349-2583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_605997357_17 at /127.0.0.1:44062 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51551@0x35fa0ee6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@75a3dd81 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp454770656-2239 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data5/current/BP-1044115924-172.31.2.10-1689089651275 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1412254669_17 at /127.0.0.1:40654 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-4bc8a2d7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1711346392-2311 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,37033,1689089652021 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: IPC Server idle connection scanner for port 44153 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@7327da22 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 186026445@qtp-1383519925-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@4f520d6e 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp154596617-2213 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1348090937-2299 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 44153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:44357 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 4 on default port 36775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 44153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51551@0x35fa0ee6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 43565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1213740349-2576 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:46437 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1378310117-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x32533615-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1643076188) connection to localhost/127.0.0.1:46437 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp454770656-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x2f4ffe65-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@1c3f7275 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1213740349-2582 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x183af44f-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1412254669_17 at /127.0.0.1:44040 [Receiving block BP-1044115924-172.31.2.10-1689089651275:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data4/current/BP-1044115924-172.31.2.10-1689089651275 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp154596617-2215 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x79221fdc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data2) java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x183af44f-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1348090937-2305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp154596617-2211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089652926 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data6/current/BP-1044115924-172.31.2.10-1689089651275 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775-SendThread(127.0.0.1:64295) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74-prefix:jenkins-hbase9.apache.org,35817,1689089652358 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1348090937-2302 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1378310117-2269 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:46437 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36775 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:64295 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x386cbc9e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x7ecf429c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 43565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:46437 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@5fb2017a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@32c4de6e java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@3fdfc87b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1044115924-172.31.2.10-1689089651275:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 44153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x29df1fab sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/582576010.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x183af44f-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:44357 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1711346392-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp454770656-2246 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:46437 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 44357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-4316d895-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1711346392-2310 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1653828753.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@6fadaf0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74-prefix:jenkins-hbase9.apache.org,41645,1689089652214 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 44153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64295@0x32533615-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp154596617-2210 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase9:32969-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase9.apache.org,38729,1689089645968 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) - Thread 
LEAK? -, OpenFileDescriptor=825 (was 794) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 451), ProcessCount=176 (was 176), AvailableMemoryMB=6012 (was 6280) 2023-07-11 15:34:14,454 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=552 is superior to 500 2023-07-11 15:34:14,473 INFO [RS:3;jenkins-hbase9:39531] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase9.apache.org%2C39531%2C1689089654144, suffix=, logDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,39531,1689089654144, archiveDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs, maxLogs=32 2023-07-11 15:34:14,486 INFO [Listener at localhost/36775] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=552, OpenFileDescriptor=825, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=176, AvailableMemoryMB=6011 2023-07-11 15:34:14,486 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=552 is superior to 500 2023-07-11 15:34:14,487 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-11 15:34:14,494 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK] 2023-07-11 15:34:14,499 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK] 2023-07-11 15:34:14,499 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK] 2023-07-11 15:34:14,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:14,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:14,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:14,502 INFO [RS:3;jenkins-hbase9:39531] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/WALs/jenkins-hbase9.apache.org,39531,1689089654144/jenkins-hbase9.apache.org%2C39531%2C1689089654144.1689089654473 2023-07-11 15:34:14,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
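The ResourceChecker summary above (Thread=552, OpenFileDescriptor=825 vs. 794) is produced by the JUnit listener visible in the thread dump itself: on test finish it snapshots all live JVM threads via Thread.getAllStackTraces() (ThreadResourceAnalyzer.getVal, called from ResourceChecker.fillEndings) and warns once the count exceeds 500, as in the "Thread=552 is superior to 500" line. What follows is only a minimal sketch of that before/after counting idea, not the HBase implementation; the class and method names (ThreadCountProbe, liveThreadCount) are invented for illustration.

import java.util.Map;

public class ThreadCountProbe {

    // Snapshot the live threads with the same JDK call seen in the stack trace above.
    static int liveThreadCount() {
        Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
        return stacks.size();
    }

    public static void main(String[] args) {
        int before = liveThreadCount();
        // ... a test body would run here ...
        int after = liveThreadCount();
        System.out.println("Thread=" + after + " (was " + before + ")");
        if (after > 500) {
            // Mirrors the log's warning: "Thread=552 is superior to 500"
            System.out.println("WARN Thread=" + after + " is superior to 500");
        }
    }
}

Such a count is only a rough leak indicator: long-lived daemon threads like the Jetty (qtp*), Netty event-loop, and ZooKeeper client threads listed in the dump stay alive between tests and keep the number elevated even when nothing new is leaking.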
2023-07-11 15:34:14,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:14,503 DEBUG [RS:3;jenkins-hbase9:39531] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46759,DS-a72bbe0e-a1f1-4475-ac24-0783d7123af0,DISK], DatanodeInfoWithStorage[127.0.0.1:44981,DS-bae447f6-c340-465b-8f12-4e0db91ea62e,DISK], DatanodeInfoWithStorage[127.0.0.1:41069,DS-f2c8cf9b-30e5-48c7-9df6-495f25435771,DISK]] 2023-07-11 15:34:14,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:14,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:14,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:14,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:14,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:14,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:14,513 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:14,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:14,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:14,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:14,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:14,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:14,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:14,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:14,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:14,524 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:14,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:40918 deadline: 1689090854524, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:14,525 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 15:34:14,527 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:14,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:14,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:14,528 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:14,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:14,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:14,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:14,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-11 15:34:14,533 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:14,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(700): Client=jenkins//172.31.2.10 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-11 15:34:14,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-11 15:34:14,534 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:14,535 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:14,535 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:14,537 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 15:34:14,538 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,539 DEBUG 
[HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/4fa83d5b465a37980d63d638e887d51e empty. 2023-07-11 15:34:14,539 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,539 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-11 15:34:14,556 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-11 15:34:14,557 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4fa83d5b465a37980d63d638e887d51e, NAME => 't1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp 2023-07-11 15:34:14,568 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:14,568 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 4fa83d5b465a37980d63d638e887d51e, disabling compactions & flushes 2023-07-11 15:34:14,568 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 2023-07-11 15:34:14,568 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 2023-07-11 15:34:14,568 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. after waiting 0 ms 2023-07-11 15:34:14,568 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 2023-07-11 15:34:14,568 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 
2023-07-11 15:34:14,568 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 4fa83d5b465a37980d63d638e887d51e: 2023-07-11 15:34:14,570 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 15:34:14,571 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 15:34:14,571 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-11 15:34:14,571 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089654571"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089654571"}]},"ts":"1689089654571"} 2023-07-11 15:34:14,571 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:34:14,571 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-11 15:34:14,571 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 15:34:14,571 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-11 15:34:14,572 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-11 15:34:14,573 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 15:34:14,573 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089654573"}]},"ts":"1689089654573"} 2023-07-11 15:34:14,574 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-11 15:34:14,577 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase9.apache.org=0} racks are {/default-rack=0} 2023-07-11 15:34:14,578 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 15:34:14,578 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 15:34:14,578 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 15:34:14,578 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-11 15:34:14,578 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 15:34:14,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=4fa83d5b465a37980d63d638e887d51e, ASSIGN}] 2023-07-11 15:34:14,579 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=4fa83d5b465a37980d63d638e887d51e, ASSIGN 2023-07-11 15:34:14,579 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=4fa83d5b465a37980d63d638e887d51e, ASSIGN; state=OFFLINE, location=jenkins-hbase9.apache.org,35817,1689089652358; forceNewPlan=false, retain=false 2023-07-11 15:34:14,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-11 15:34:14,730 INFO [jenkins-hbase9:37033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 15:34:14,730 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=4fa83d5b465a37980d63d638e887d51e, regionState=OPENING, regionLocation=jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:14,731 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089654730"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089654730"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089654730"}]},"ts":"1689089654730"} 2023-07-11 15:34:14,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 4fa83d5b465a37980d63d638e887d51e, server=jenkins-hbase9.apache.org,35817,1689089652358}] 2023-07-11 15:34:14,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-11 15:34:14,888 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(130): Open t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 2023-07-11 15:34:14,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4fa83d5b465a37980d63d638e887d51e, NAME => 't1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.', STARTKEY => '', ENDKEY => ''} 2023-07-11 15:34:14,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(866): Instantiated t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 15:34:14,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7894): checking encryption for 4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(7897): checking classloading for 4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,890 INFO [StoreOpener-4fa83d5b465a37980d63d638e887d51e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,891 DEBUG [StoreOpener-4fa83d5b465a37980d63d638e887d51e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/default/t1/4fa83d5b465a37980d63d638e887d51e/cf1 2023-07-11 15:34:14,891 DEBUG [StoreOpener-4fa83d5b465a37980d63d638e887d51e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/default/t1/4fa83d5b465a37980d63d638e887d51e/cf1 2023-07-11 15:34:14,891 INFO [StoreOpener-4fa83d5b465a37980d63d638e887d51e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4fa83d5b465a37980d63d638e887d51e columnFamilyName cf1 2023-07-11 15:34:14,892 INFO [StoreOpener-4fa83d5b465a37980d63d638e887d51e-1] regionserver.HStore(310): Store=4fa83d5b465a37980d63d638e887d51e/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 15:34:14,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/default/t1/4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/default/t1/4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1055): writing seq id for 4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:14,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/default/t1/4fa83d5b465a37980d63d638e887d51e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 15:34:14,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1072): Opened 4fa83d5b465a37980d63d638e887d51e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10640774240, jitterRate=-0.00900067389011383}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 15:34:14,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(965): Region open journal for 4fa83d5b465a37980d63d638e887d51e: 2023-07-11 15:34:14,898 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e., pid=14, masterSystemTime=1689089654883 2023-07-11 15:34:14,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 2023-07-11 15:34:14,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase9:0-0] handler.AssignRegionHandler(158): Opened t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 
2023-07-11 15:34:14,900 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=4fa83d5b465a37980d63d638e887d51e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:14,900 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089654900"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689089654900"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689089654900"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689089654900"}]},"ts":"1689089654900"} 2023-07-11 15:34:14,906 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-11 15:34:14,906 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 4fa83d5b465a37980d63d638e887d51e, server=jenkins-hbase9.apache.org,35817,1689089652358 in 169 msec 2023-07-11 15:34:14,907 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-11 15:34:14,907 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=4fa83d5b465a37980d63d638e887d51e, ASSIGN in 328 msec 2023-07-11 15:34:14,907 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 15:34:14,908 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089654907"}]},"ts":"1689089654907"} 2023-07-11 15:34:14,909 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-11 15:34:14,910 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 15:34:14,912 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 381 msec 2023-07-11 15:34:15,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-11 15:34:15,137 INFO [Listener at localhost/36775] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-11 15:34:15,138 DEBUG [Listener at localhost/36775] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-11 15:34:15,138 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,140 INFO [Listener at localhost/36775] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-11 15:34:15,140 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,140 INFO [Listener at localhost/36775] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
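The pid=12 CreateTableProcedure trace above (PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS, UPDATE_DESC_CACHE, POST_OPERATION) is the master-side view of a single client create-table call for 't1' with one column family 'cf1'. As a rough, hedged illustration only — this is not the test's own code, the class and variable names below are made up, and the test builds its table descriptor through its own helpers — an equivalent call through the stock HBase 2.x client Admin API would look roughly like this:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public final class CreateT1Sketch {           // illustrative name, not from the test
  public static void main(String[] args) throws Exception {
    // Connects with whatever hbase-site.xml is on the classpath; the test would
    // instead pass the minicluster's Configuration.
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName t1 = TableName.valueOf("t1");
      // One family 'cf1' with library defaults; the HMaster log entry above
      // simply spells those defaults out (BLOOMFILTER, VERSIONS, TTL, ...).
      admin.createTable(TableDescriptorBuilder.newBuilder(t1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
          .build());
      // Issuing the same createTable again while t1 exists fails with
      // TableExistsException, which is what the pid=15 rollback below records.
    }
  }
}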
2023-07-11 15:34:15,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.HMaster$4(2112): Client=jenkins//172.31.2.10 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 15:34:15,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-11 15:34:15,144 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 15:34:15,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-11 15:34:15,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.2.10:40918 deadline: 1689089715141, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-11 15:34:15,147 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,148 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-11 15:34:15,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:15,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:15,248 INFO [Listener at localhost/36775] client.HBaseAdmin$15(890): Started disable of t1 2023-07-11 15:34:15,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.HMaster$11(2418): Client=jenkins//172.31.2.10 disable t1 2023-07-11 15:34:15,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-11 15:34:15,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 15:34:15,253 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089655252"}]},"ts":"1689089655252"} 2023-07-11 15:34:15,254 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-11 15:34:15,255 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-11 15:34:15,256 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=4fa83d5b465a37980d63d638e887d51e, UNASSIGN}] 2023-07-11 15:34:15,256 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=4fa83d5b465a37980d63d638e887d51e, UNASSIGN 2023-07-11 15:34:15,257 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=4fa83d5b465a37980d63d638e887d51e, regionState=CLOSING, regionLocation=jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:15,257 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089655257"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689089655257"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689089655257"}]},"ts":"1689089655257"} 2023-07-11 15:34:15,258 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 4fa83d5b465a37980d63d638e887d51e, server=jenkins-hbase9.apache.org,35817,1689089652358}] 2023-07-11 15:34:15,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 15:34:15,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(111): Close 4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:15,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 4fa83d5b465a37980d63d638e887d51e, disabling compactions & flushes 2023-07-11 15:34:15,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 2023-07-11 15:34:15,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 2023-07-11 15:34:15,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. after waiting 0 ms 2023-07-11 15:34:15,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 
2023-07-11 15:34:15,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/default/t1/4fa83d5b465a37980d63d638e887d51e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 15:34:15,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e. 2023-07-11 15:34:15,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 4fa83d5b465a37980d63d638e887d51e: 2023-07-11 15:34:15,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.UnassignRegionHandler(149): Closed 4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:15,417 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=4fa83d5b465a37980d63d638e887d51e, regionState=CLOSED 2023-07-11 15:34:15,417 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689089655417"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689089655417"}]},"ts":"1689089655417"} 2023-07-11 15:34:15,419 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-11 15:34:15,419 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 4fa83d5b465a37980d63d638e887d51e, server=jenkins-hbase9.apache.org,35817,1689089652358 in 160 msec 2023-07-11 15:34:15,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-11 15:34:15,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=4fa83d5b465a37980d63d638e887d51e, UNASSIGN in 163 msec 2023-07-11 15:34:15,421 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689089655421"}]},"ts":"1689089655421"} 2023-07-11 15:34:15,422 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-11 15:34:15,423 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-11 15:34:15,425 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 175 msec 2023-07-11 15:34:15,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 15:34:15,554 INFO [Listener at localhost/36775] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-11 15:34:15,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.HMaster$5(2228): Client=jenkins//172.31.2.10 delete t1 2023-07-11 15:34:15,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-11 15:34:15,557 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-11 15:34:15,557 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-11 15:34:15,558 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-11 15:34:15,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:15,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:15,561 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:15,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-11 15:34:15,563 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/4fa83d5b465a37980d63d638e887d51e/cf1, FileablePath, hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/4fa83d5b465a37980d63d638e887d51e/recovered.edits] 2023-07-11 15:34:15,568 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/4fa83d5b465a37980d63d638e887d51e/recovered.edits/4.seqid to hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/archive/data/default/t1/4fa83d5b465a37980d63d638e887d51e/recovered.edits/4.seqid 2023-07-11 15:34:15,568 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/.tmp/data/default/t1/4fa83d5b465a37980d63d638e887d51e 2023-07-11 15:34:15,569 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-11 15:34:15,571 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-11 15:34:15,572 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-11 15:34:15,573 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-11 15:34:15,574 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-11 15:34:15,574 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-11 15:34:15,574 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689089655574"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:15,576 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 15:34:15,576 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 4fa83d5b465a37980d63d638e887d51e, NAME => 't1,,1689089654530.4fa83d5b465a37980d63d638e887d51e.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 15:34:15,576 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-11 15:34:15,576 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689089655576"}]},"ts":"9223372036854775807"} 2023-07-11 15:34:15,577 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-11 15:34:15,579 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-11 15:34:15,580 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 24 msec 2023-07-11 15:34:15,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-11 15:34:15,663 INFO [Listener at localhost/36775] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-11 15:34:15,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:15,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
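The pid=16 DisableTableProcedure and pid=19 DeleteTableProcedure entries above (region close, archival of the region directory, removal of the region rows and table state from hbase:meta) are the server side of the usual disable-then-delete teardown. A minimal sketch of the matching client calls, again using the stock Admin API rather than the test's own helpers (class name below is illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class DropT1Sketch {             // illustrative name, not from the test
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName t1 = TableName.valueOf("t1");
      if (admin.tableExists(t1)) {
        // A table must be disabled before deletion; deleting an enabled table
        // is rejected by the master.
        if (admin.isTableEnabled(t1)) {
          admin.disableTable(t1);   // server side: DisableTableProcedure (pid=16 above)
        }
        admin.deleteTable(t1);      // server side: DeleteTableProcedure (pid=19 above)
      }
    }
  }
}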
2023-07-11 15:34:15,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:15,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:15,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:15,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:15,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:15,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:15,680 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:15,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:15,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:15,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:15,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:15,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:15,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:15,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.2.10:40918 deadline: 1689090855689, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:15,690 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:34:15,693 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,694 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:15,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:15,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:15,712 INFO [Listener at localhost/36775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=566 (was 552) - Thread LEAK? -, OpenFileDescriptor=837 (was 825) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 404), ProcessCount=176 (was 176), AvailableMemoryMB=6003 (was 6011) 2023-07-11 15:34:15,713 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=566 is superior to 500 2023-07-11 15:34:15,729 INFO [Listener at localhost/36775] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=566, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=176, AvailableMemoryMB=6003 2023-07-11 15:34:15,729 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=566 is superior to 500 2023-07-11 15:34:15,729 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-11 15:34:15,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:15,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 15:34:15,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:15,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:15,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:15,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:15,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:15,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:15,742 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:15,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:15,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,744 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:15,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:15,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:15,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:15,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:15,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:40918 deadline: 1689090855751, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:15,751 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 15:34:15,753 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,754 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:15,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:15,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:15,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-11 15:34:15,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:15,756 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-11 15:34:15,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-11 15:34:15,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 15:34:15,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:15,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
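
(Note, not part of the captured log.) The ConstraintException traces above repeat at every setUp/tearDown of TestRSGroupsBase: the harness asks RSGroupAdminClient.moveServers to put the master's address (jenkins-hbase9.apache.org:37033, the port serving these RPCs) into the "master" rsgroup, but that address is not in the default group's server list shown above, so RSGroupAdminServer rejects it as "either offline or it does not exist" and the test logs it as "Got this on setup, FYI" and continues. A minimal sketch of that tolerated call follows, assuming the branch-2.4 hbase-rsgroup client API named in the stack trace (RSGroupAdminClient.moveServers(Set<Address>, String), org.apache.hadoop.hbase.net.Address, org.apache.hadoop.hbase.constraint.ConstraintException); the wrapper method, its parameters, and the use of System.out are hypothetical.

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveMasterToGroupSketch {
      private MoveMasterToGroupSketch() {
      }

      // Attempt the same move the test makes (master address -> "master" group)
      // and tolerate the "offline or does not exist" rejection, mirroring what
      // TestRSGroupsBase does in its setup/teardown.
      static void moveServerIgnoringOffline(Connection conn, String host, int port,
          String targetGroup) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts(host, port)), targetGroup);
        } catch (ConstraintException e) {
          // e.g. "Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist."
          System.out.println("Got this on setup, FYI: " + e.getMessage());
        }
      }
    }
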
2023-07-11 15:34:15,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:15,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:15,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:15,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:15,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:15,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:15,773 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:15,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:15,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:15,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:15,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:15,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:15,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:15,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:40918 deadline: 1689090855781, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:15,781 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:34:15,783 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,784 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:15,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:15,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:15,804 INFO [Listener at localhost/36775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=568 (was 566) - Thread LEAK? 
-, OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 404), ProcessCount=176 (was 176), AvailableMemoryMB=6003 (was 6003) 2023-07-11 15:34:15,804 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-11 15:34:15,823 INFO [Listener at localhost/36775] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=568, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=176, AvailableMemoryMB=6003 2023-07-11 15:34:15,823 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-11 15:34:15,823 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-11 15:34:15,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:15,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 15:34:15,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:15,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:15,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:15,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:15,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:15,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:15,835 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:15,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:15,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,838 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:15,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:15,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:15,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:15,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:15,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:40918 deadline: 1689090855846, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:15,846 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 15:34:15,848 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,849 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:15,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:15,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:15,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:15,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
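
(Note, not part of the captured log.) The paired "Waiting up to [60,000] milli-secs(wait.for.ratio=[1])" and "Waiting for cleanup to finish [...]" lines that bracket each test come from a Waiter predicate that TestRSGroupsBase polls until the rsgroup state settles back to the "default" and "master" groups listed above. A minimal sketch of that polling pattern follows, assuming HBaseTestingUtility.waitFor(long, Waiter.Predicate) and RSGroupAdminClient.listRSGroups() as referenced in this log; the predicate body (checking only that two groups remain) is a simplification and hypothetical relative to the real TestRSGroupsBase condition.

    import java.util.List;

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.Waiter;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class WaitForRSGroupCleanupSketch {
      private WaitForRSGroupCleanupSketch() {
      }

      // Poll the rsgroup state until only the expected groups remain, mirroring
      // the hbase.Waiter / TestRSGroupsBase$2 lines above (60s timeout, default
      // wait.for.ratio).
      static void waitForCleanup(HBaseTestingUtility testUtil,
          RSGroupAdminClient rsGroupAdmin) throws Exception {
        testUtil.waitFor(60000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
            System.out.println("Waiting for cleanup to finish " + groups);
            // Simplified condition: only "default" and "master" are left.
            return groups.size() == 2;
          }
        });
      }
    }
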
2023-07-11 15:34:15,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:15,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:15,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:15,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:15,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:15,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:15,863 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:15,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:15,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:15,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:15,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:15,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:15,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:15,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:40918 deadline: 1689090855872, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:15,873 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:34:15,875 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,876 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:15,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:15,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:15,897 INFO [Listener at localhost/36775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=569 (was 568) - Thread LEAK? 
-, OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 404), ProcessCount=176 (was 176), AvailableMemoryMB=6002 (was 6003) 2023-07-11 15:34:15,897 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-11 15:34:15,917 INFO [Listener at localhost/36775] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=569, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=404, ProcessCount=176, AvailableMemoryMB=6002 2023-07-11 15:34:15,917 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-11 15:34:15,917 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-11 15:34:15,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:15,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 15:34:15,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:15,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:15,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:15,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:15,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:15,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:15,929 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:15,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:15,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,931 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:15,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:15,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:15,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:15,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:15,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:40918 deadline: 1689090855938, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:15,939 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 15:34:15,941 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:15,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,942 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:15,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:15,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:15,943 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-11 15:34:15,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_foo 2023-07-11 15:34:15,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-11 15:34:15,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:15,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:15,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 15:34:15,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:15,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:15,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:15,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.HMaster$15(3014): Client=jenkins//172.31.2.10 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-11 15:34:15,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-11 15:34:15,957 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 15:34:15,961 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:15,964 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-11 15:34:16,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 15:34:16,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_foo 2023-07-11 15:34:16,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:16,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.2.10:40918 deadline: 1689090856059, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-11 15:34:16,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.HMaster$16(3053): Client=jenkins//172.31.2.10 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-11 15:34:16,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-11 15:34:16,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 15:34:16,079 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-11 15:34:16,080 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-11 15:34:16,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 15:34:16,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup Group_anotherGroup 2023-07-11 15:34:16,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-11 15:34:16,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:16,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-11 15:34:16,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:16,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 15:34:16,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:16,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:16,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:16,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.HMaster$17(3086): Client=jenkins//172.31.2.10 delete Group_foo 2023-07-11 15:34:16,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 15:34:16,195 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 15:34:16,197 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 15:34:16,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-11 15:34:16,198 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 15:34:16,199 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-11 15:34:16,199 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 15:34:16,200 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 15:34:16,201 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 15:34:16,202 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-11 15:34:16,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-11 15:34:16,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_foo 2023-07-11 15:34:16,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-11 15:34:16,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:16,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:16,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-11 15:34:16,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:16,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:16,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:16,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:16,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.2.10:40918 deadline: 1689089716308, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-11 15:34:16,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:16,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:16,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:16,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 15:34:16,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:16,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:16,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:16,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup Group_anotherGroup 2023-07-11 15:34:16,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:16,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:16,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 15:34:16,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:16,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.2.10 move tables [] to rsgroup default 2023-07-11 15:34:16,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 15:34:16,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveTables 2023-07-11 15:34:16,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [] to rsgroup default 2023-07-11 15:34:16,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.MoveServers 2023-07-11 15:34:16,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.2.10 remove rsgroup master 2023-07-11 15:34:16,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:16,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 15:34:16,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 15:34:16,325 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 15:34:16,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.2.10 add rsgroup master 2023-07-11 15:34:16,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 15:34:16,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 15:34:16,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 15:34:16,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 15:34:16,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:16,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:16,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.2.10 move servers [jenkins-hbase9.apache.org:37033] to rsgroup master 2023-07-11 15:34:16,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 15:34:16,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.2.10:40918 deadline: 1689090856334, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 2023-07-11 15:34:16,334 WARN [Listener at localhost/36775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase9.apache.org:37033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 15:34:16,336 INFO [Listener at localhost/36775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 15:34:16,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.2.10 list rsgroup 2023-07-11 15:34:16,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 15:34:16,337 INFO [Listener at localhost/36775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase9.apache.org:32969, jenkins-hbase9.apache.org:35817, jenkins-hbase9.apache.org:39531, jenkins-hbase9.apache.org:41645], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 15:34:16,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.2.10 initiates rsgroup info retrieval, group=default 2023-07-11 15:34:16,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.2.10) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 15:34:16,355 INFO [Listener at localhost/36775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=569 (was 569), OpenFileDescriptor=833 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=404 (was 404), ProcessCount=176 (was 176), AvailableMemoryMB=6001 (was 6002) 2023-07-11 15:34:16,355 WARN [Listener at localhost/36775] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-11 15:34:16,355 INFO [Listener at localhost/36775] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-11 15:34:16,355 INFO [Listener at localhost/36775] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-11 15:34:16,355 DEBUG [Listener at localhost/36775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7ecf429c to 127.0.0.1:64295 2023-07-11 15:34:16,355 DEBUG [Listener at localhost/36775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,356 DEBUG [Listener at localhost/36775] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-11 15:34:16,356 
DEBUG [Listener at localhost/36775] util.JVMClusterUtil(257): Found active master hash=403029915, stopped=false 2023-07-11 15:34:16,356 DEBUG [Listener at localhost/36775] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 15:34:16,356 DEBUG [Listener at localhost/36775] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 15:34:16,356 INFO [Listener at localhost/36775] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:16,357 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:16,357 INFO [Listener at localhost/36775] procedure2.ProcedureExecutor(629): Stopping 2023-07-11 15:34:16,357 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:16,357 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:16,357 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:16,357 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 15:34:16,358 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:16,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:16,358 DEBUG [Listener at localhost/36775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29df1fab to 127.0.0.1:64295 2023-07-11 15:34:16,358 DEBUG [Listener at localhost/36775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:16,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:16,358 INFO [Listener at localhost/36775] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,41645,1689089652214' ***** 2023-07-11 15:34:16,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on 
znode that does not yet exist, /hbase/running 2023-07-11 15:34:16,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 15:34:16,358 INFO [Listener at localhost/36775] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:16,359 INFO [Listener at localhost/36775] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,35817,1689089652358' ***** 2023-07-11 15:34:16,359 INFO [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:16,359 INFO [Listener at localhost/36775] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:16,359 INFO [Listener at localhost/36775] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,32969,1689089652518' ***** 2023-07-11 15:34:16,359 INFO [Listener at localhost/36775] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:16,359 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:16,360 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:16,360 INFO [Listener at localhost/36775] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,39531,1689089654144' ***** 2023-07-11 15:34:16,360 INFO [Listener at localhost/36775] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 15:34:16,361 INFO [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:16,365 INFO [RS:0;jenkins-hbase9:41645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@10fa6db4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:16,365 INFO [RS:2;jenkins-hbase9:32969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3b91e364{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:16,365 INFO [RS:3;jenkins-hbase9:39531] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7039aa30{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:16,365 INFO [RS:1;jenkins-hbase9:35817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6d1c7201{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 15:34:16,365 INFO [RS:0;jenkins-hbase9:41645] server.AbstractConnector(383): Stopped ServerConnector@2a106a9f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:16,366 INFO [RS:3;jenkins-hbase9:39531] server.AbstractConnector(383): Stopped ServerConnector@3fb94637{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:16,366 INFO [RS:0;jenkins-hbase9:41645] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:16,366 INFO [RS:2;jenkins-hbase9:32969] server.AbstractConnector(383): Stopped ServerConnector@26dd9db0{HTTP/1.1, 
(http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:16,366 INFO [RS:1;jenkins-hbase9:35817] server.AbstractConnector(383): Stopped ServerConnector@3b512c06{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:16,367 INFO [RS:0;jenkins-hbase9:41645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7db5db50{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:16,366 INFO [RS:3;jenkins-hbase9:39531] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:16,368 INFO [RS:0;jenkins-hbase9:41645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@226cb0e2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:16,367 INFO [RS:1;jenkins-hbase9:35817] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:16,367 INFO [RS:2;jenkins-hbase9:32969] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:16,368 INFO [RS:3;jenkins-hbase9:39531] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10eaf42a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:16,369 INFO [RS:3;jenkins-hbase9:39531] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@781925b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:16,369 INFO [RS:1;jenkins-hbase9:35817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f123dc1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:16,369 INFO [RS:2;jenkins-hbase9:32969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@742966bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:16,370 INFO [RS:1;jenkins-hbase9:35817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7198b9b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:16,371 INFO [RS:2;jenkins-hbase9:32969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7745cde6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:16,369 INFO [RS:0;jenkins-hbase9:41645] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:16,371 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:16,371 INFO [RS:3;jenkins-hbase9:39531] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:16,371 INFO [RS:3;jenkins-hbase9:39531] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
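
The RSGroupAdminService.ListRSGroupInfos and RSGroupAdminService.GetRSGroupInfo calls recorded just before this shutdown are the master-side view of the rsgroup admin API this test exercises. A minimal client-side sketch of those two calls, assuming the branch-2 RSGroupAdminClient API; the quorum address is a placeholder and the snippet is an illustration, not the test's own code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRSGroupsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");   // placeholder quorum
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Client-side equivalent of the ListRSGroupInfos call seen in the log.
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        System.out.println(group.getName() + " servers=" + group.getServers()
            + " tables=" + group.getTables());
      }
      // Client-side equivalent of the GetRSGroupInfo call for the 'default' group.
      RSGroupInfo def = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group has " + def.getServers().size() + " servers");
    }
  }
}
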
2023-07-11 15:34:16,371 INFO [RS:3;jenkins-hbase9:39531] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:16,371 INFO [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:16,371 INFO [RS:2;jenkins-hbase9:32969] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:16,372 INFO [RS:1;jenkins-hbase9:35817] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 15:34:16,372 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:16,372 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:16,371 INFO [RS:0;jenkins-hbase9:41645] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:16,372 INFO [RS:0;jenkins-hbase9:41645] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:16,372 INFO [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:16,372 DEBUG [RS:0;jenkins-hbase9:41645] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x72ba8b0e to 127.0.0.1:64295 2023-07-11 15:34:16,372 DEBUG [RS:0;jenkins-hbase9:41645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,372 INFO [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,41645,1689089652214; all regions closed. 2023-07-11 15:34:16,372 INFO [RS:1;jenkins-hbase9:35817] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:16,372 INFO [RS:2;jenkins-hbase9:32969] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 15:34:16,372 DEBUG [RS:3;jenkins-hbase9:39531] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x386cbc9e to 127.0.0.1:64295 2023-07-11 15:34:16,372 DEBUG [RS:3;jenkins-hbase9:39531] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,373 INFO [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,39531,1689089654144; all regions closed. 2023-07-11 15:34:16,371 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 15:34:16,372 INFO [RS:2;jenkins-hbase9:32969] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 15:34:16,372 INFO [RS:1;jenkins-hbase9:35817] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
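
The teardown recorded here, with each region server stopping its flush and snapshot procedure managers and then logging "stopping server ... all regions closed", is the cascade that HBaseTestingUtility drives when a test shuts its mini-cluster down. A minimal sketch of that start/stop pattern, assuming the HBase 2.4 test utility; the cluster sizing mirrors this suite's options and the test body is elided:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterLifecycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Bring up a small cluster: 1 master, 3 region servers.
    util.startMiniCluster(StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .build());
    try {
      // ... test body would exercise the cluster here ...
    } finally {
      // Triggers the shutdown cascade seen in this log: region servers stop,
      // WALs are closed and archived, and the master stops last.
      util.shutdownMiniCluster();
    }
  }
}
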
2023-07-11 15:34:16,373 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(3305): Received CLOSE for e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:16,373 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(3305): Received CLOSE for bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:16,373 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:16,373 DEBUG [RS:1;jenkins-hbase9:35817] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f4ffe65 to 127.0.0.1:64295 2023-07-11 15:34:16,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing bec0bc3d4a60a2beb91b5784ba8a455e, disabling compactions & flushes 2023-07-11 15:34:16,373 DEBUG [RS:1;jenkins-hbase9:35817] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,373 INFO [RS:1;jenkins-hbase9:35817] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:16,373 INFO [RS:1;jenkins-hbase9:35817] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:16,373 INFO [RS:1;jenkins-hbase9:35817] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 15:34:16,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 2023-07-11 15:34:16,373 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-11 15:34:16,373 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:16,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 2023-07-11 15:34:16,373 DEBUG [RS:2;jenkins-hbase9:32969] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x32533615 to 127.0.0.1:64295 2023-07-11 15:34:16,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing e8c87b4aafa4a2756a9b6f91d7103fb0, disabling compactions & flushes 2023-07-11 15:34:16,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. after waiting 0 ms 2023-07-11 15:34:16,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 
2023-07-11 15:34:16,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing bec0bc3d4a60a2beb91b5784ba8a455e 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-11 15:34:16,374 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 15:34:16,374 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 15:34:16,374 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 15:34:16,374 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 15:34:16,374 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 15:34:16,374 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-11 15:34:16,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 2023-07-11 15:34:16,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 2023-07-11 15:34:16,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. after waiting 0 ms 2023-07-11 15:34:16,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 
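
The "Flushing ... column families" entries above are close-time memstore flushes for hbase:namespace, hbase:meta and hbase:rsgroup: each memstore is written out as an HFile before the region closes. The same flush can also be requested explicitly through the public Admin API; a minimal sketch, with connection details left as defaults:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ExplicitFlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Ask the region servers to flush all regions of a table; each flush writes
      // the memstore out as an HFile, like the close-time flushes in the log above.
      admin.flush(TableName.valueOf("hbase:namespace"));
    }
  }
}
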
2023-07-11 15:34:16,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2745): Flushing e8c87b4aafa4a2756a9b6f91d7103fb0 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-11 15:34:16,374 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-11 15:34:16,374 DEBUG [RS:2;jenkins-hbase9:32969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,376 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-11 15:34:16,376 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1478): Online Regions={e8c87b4aafa4a2756a9b6f91d7103fb0=hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0.} 2023-07-11 15:34:16,376 DEBUG [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1504): Waiting on e8c87b4aafa4a2756a9b6f91d7103fb0 2023-07-11 15:34:16,376 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1478): Online Regions={bec0bc3d4a60a2beb91b5784ba8a455e=hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e., 1588230740=hbase:meta,,1.1588230740} 2023-07-11 15:34:16,378 DEBUG [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1504): Waiting on 1588230740, bec0bc3d4a60a2beb91b5784ba8a455e 2023-07-11 15:34:16,382 DEBUG [RS:3;jenkins-hbase9:39531] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs 2023-07-11 15:34:16,382 INFO [RS:3;jenkins-hbase9:39531] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C39531%2C1689089654144:(num 1689089654473) 2023-07-11 15:34:16,382 DEBUG [RS:3;jenkins-hbase9:39531] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,382 INFO [RS:3;jenkins-hbase9:39531] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:16,383 DEBUG [RS:0;jenkins-hbase9:41645] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs 2023-07-11 15:34:16,383 INFO [RS:0;jenkins-hbase9:41645] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C41645%2C1689089652214:(num 1689089653247) 2023-07-11 15:34:16,383 DEBUG [RS:0;jenkins-hbase9:41645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,383 INFO [RS:0;jenkins-hbase9:41645] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:16,384 INFO [RS:0;jenkins-hbase9:41645] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:16,384 INFO [RS:0;jenkins-hbase9:41645] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:16,384 INFO [RS:0;jenkins-hbase9:41645] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:16,384 INFO [RS:0;jenkins-hbase9:41645] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 15:34:16,384 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
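
Once a region server has no regions left, its WAL is closed and the last file is moved to oldWALs, as the AbstractFSWAL entries above show. A WAL roll (close the current writer, start a new file) can also be requested per server through the Admin API; a sketch under that assumption, with the server name values borrowed from this log purely as placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // hostname,port,startcode identifies one region server; values are placeholders.
      ServerName rs = ServerName.valueOf("jenkins-hbase9.apache.org", 41645, 1689089652214L);
      // Roll the WAL writer on that server; the replaced file is later archived to oldWALs.
      admin.rollWALWriter(rs);
    }
  }
}
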
2023-07-11 15:34:16,390 INFO [RS:0;jenkins-hbase9:41645] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:41645 2023-07-11 15:34:16,398 INFO [RS:3;jenkins-hbase9:39531] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:16,398 INFO [RS:3;jenkins-hbase9:39531] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:16,398 INFO [RS:3;jenkins-hbase9:39531] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:16,398 INFO [RS:3;jenkins-hbase9:39531] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 15:34:16,399 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 15:34:16,402 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:16,402 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:16,402 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:16,430 INFO [RS:3;jenkins-hbase9:39531] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:39531 2023-07-11 15:34:16,430 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e/.tmp/info/2d42c2701aaf486ba331e06d996435da 2023-07-11 15:34:16,433 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/.tmp/info/cbf6c6d348054d95a91046fa2ede10c2 2023-07-11 15:34:16,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0/.tmp/m/34f0568c6bb244b5ba92bfca3a23b775 2023-07-11 15:34:16,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2d42c2701aaf486ba331e06d996435da 2023-07-11 15:34:16,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e/.tmp/info/2d42c2701aaf486ba331e06d996435da as hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e/info/2d42c2701aaf486ba331e06d996435da 2023-07-11 15:34:16,447 INFO [regionserver/jenkins-hbase9:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:16,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34f0568c6bb244b5ba92bfca3a23b775 2023-07-11 15:34:16,448 
INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cbf6c6d348054d95a91046fa2ede10c2 2023-07-11 15:34:16,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0/.tmp/m/34f0568c6bb244b5ba92bfca3a23b775 as hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0/m/34f0568c6bb244b5ba92bfca3a23b775 2023-07-11 15:34:16,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2d42c2701aaf486ba331e06d996435da 2023-07-11 15:34:16,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e/info/2d42c2701aaf486ba331e06d996435da, entries=3, sequenceid=9, filesize=5.0 K 2023-07-11 15:34:16,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for bec0bc3d4a60a2beb91b5784ba8a455e in 80ms, sequenceid=9, compaction requested=false 2023-07-11 15:34:16,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34f0568c6bb244b5ba92bfca3a23b775 2023-07-11 15:34:16,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0/m/34f0568c6bb244b5ba92bfca3a23b775, entries=12, sequenceid=29, filesize=5.4 K 2023-07-11 15:34:16,459 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for e8c87b4aafa4a2756a9b6f91d7103fb0 in 84ms, sequenceid=29, compaction requested=false 2023-07-11 15:34:16,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/namespace/bec0bc3d4a60a2beb91b5784ba8a455e/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-11 15:34:16,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 2023-07-11 15:34:16,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for bec0bc3d4a60a2beb91b5784ba8a455e: 2023-07-11 15:34:16,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689089653350.bec0bc3d4a60a2beb91b5784ba8a455e. 
2023-07-11 15:34:16,466 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/.tmp/rep_barrier/e793102fec99405d983eccca9f2596aa 2023-07-11 15:34:16,473 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/rsgroup/e8c87b4aafa4a2756a9b6f91d7103fb0/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-11 15:34:16,473 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:34:16,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 2023-07-11 15:34:16,473 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for e8c87b4aafa4a2756a9b6f91d7103fb0: 2023-07-11 15:34:16,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689089653525.e8c87b4aafa4a2756a9b6f91d7103fb0. 2023-07-11 15:34:16,475 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e793102fec99405d983eccca9f2596aa 2023-07-11 15:34:16,475 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:16,475 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:16,475 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:16,475 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:16,475 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:16,475 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:16,476 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:16,476 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:16,476 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,39531,1689089654144 2023-07-11 15:34:16,475 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:16,476 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:16,476 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:16,476 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,41645,1689089652214 2023-07-11 15:34:16,484 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/.tmp/table/15c59591460d41e1895985836d6dfabd 2023-07-11 15:34:16,489 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15c59591460d41e1895985836d6dfabd 2023-07-11 15:34:16,489 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/.tmp/info/cbf6c6d348054d95a91046fa2ede10c2 as hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/info/cbf6c6d348054d95a91046fa2ede10c2 2023-07-11 15:34:16,494 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cbf6c6d348054d95a91046fa2ede10c2 2023-07-11 15:34:16,494 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/info/cbf6c6d348054d95a91046fa2ede10c2, entries=22, sequenceid=26, filesize=7.3 K 2023-07-11 15:34:16,495 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/.tmp/rep_barrier/e793102fec99405d983eccca9f2596aa as hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/rep_barrier/e793102fec99405d983eccca9f2596aa 2023-07-11 15:34:16,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e793102fec99405d983eccca9f2596aa 2023-07-11 15:34:16,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/rep_barrier/e793102fec99405d983eccca9f2596aa, entries=1, sequenceid=26, filesize=4.9 K 2023-07-11 15:34:16,503 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/.tmp/table/15c59591460d41e1895985836d6dfabd as hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/table/15c59591460d41e1895985836d6dfabd 2023-07-11 15:34:16,509 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15c59591460d41e1895985836d6dfabd 2023-07-11 15:34:16,509 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HStore(1080): Added hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/table/15c59591460d41e1895985836d6dfabd, entries=6, sequenceid=26, filesize=5.1 K 2023-07-11 15:34:16,510 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 136ms, sequenceid=26, compaction requested=false 2023-07-11 15:34:16,520 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-11 15:34:16,521 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 15:34:16,521 INFO [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 15:34:16,521 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 15:34:16,521 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase9:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-11 15:34:16,575 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,41645,1689089652214] 2023-07-11 15:34:16,575 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,41645,1689089652214; numProcessing=1 2023-07-11 15:34:16,576 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,32969,1689089652518; all regions closed. 
2023-07-11 15:34:16,577 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,41645,1689089652214 already deleted, retry=false 2023-07-11 15:34:16,577 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,41645,1689089652214 expired; onlineServers=3 2023-07-11 15:34:16,577 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,39531,1689089654144] 2023-07-11 15:34:16,577 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,39531,1689089654144; numProcessing=2 2023-07-11 15:34:16,578 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,35817,1689089652358; all regions closed. 2023-07-11 15:34:16,578 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,39531,1689089654144 already deleted, retry=false 2023-07-11 15:34:16,578 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,39531,1689089654144 expired; onlineServers=2 2023-07-11 15:34:16,583 DEBUG [RS:2;jenkins-hbase9:32969] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs 2023-07-11 15:34:16,584 INFO [RS:2;jenkins-hbase9:32969] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C32969%2C1689089652518:(num 1689089653246) 2023-07-11 15:34:16,584 DEBUG [RS:2;jenkins-hbase9:32969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,585 INFO [RS:2;jenkins-hbase9:32969] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:16,585 INFO [RS:2;jenkins-hbase9:32969] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:16,585 INFO [RS:2;jenkins-hbase9:32969] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 15:34:16,585 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 15:34:16,585 INFO [RS:2;jenkins-hbase9:32969] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 15:34:16,585 INFO [RS:2;jenkins-hbase9:32969] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
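
The RegionServerTracker activity above is driven by ephemeral znodes under /hbase/rs: each region server registers itself with an ephemeral node, and when its ZooKeeper session ends the node is deleted and the master treats the server as expired. A generic ZooKeeper sketch of that register-and-watch pattern (quorum address, path and timings are placeholders; this illustrates the mechanism, not HBase's internal code):

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralRegistrationSketch {

  // Open a session and block until it reports SyncConnected.
  static ZooKeeper connect(String quorum) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper(quorum, 30000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();
    return zk;
  }

  public static void main(String[] args) throws Exception {
    String quorum = "127.0.0.1:2181";          // placeholder quorum address

    ZooKeeper tracker = connect(quorum);       // plays the master's tracker
    ZooKeeper server = connect(quorum);        // plays one region server

    // The "region server" registers itself with an ephemeral node.
    String path = server.create("/rs-demo", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

    // The "tracker" watches it; NodeDeleted fires once the node disappears.
    tracker.exists(path, event -> {
      if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
        System.out.println("server expired: " + event.getPath());
      }
    });

    // Closing the server's session deletes the ephemeral node and triggers the
    // watch, analogous to the /hbase/rs NodeDeleted events in the log above.
    server.close();
    Thread.sleep(1000);
    tracker.close();
  }
}
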
2023-07-11 15:34:16,586 INFO [RS:2;jenkins-hbase9:32969] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:32969 2023-07-11 15:34:16,588 DEBUG [RS:1;jenkins-hbase9:35817] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs 2023-07-11 15:34:16,588 INFO [RS:1;jenkins-hbase9:35817] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C35817%2C1689089652358.meta:.meta(num 1689089653295) 2023-07-11 15:34:16,588 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:16,588 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:16,589 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,32969,1689089652518] 2023-07-11 15:34:16,589 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,32969,1689089652518; numProcessing=3 2023-07-11 15:34:16,591 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,32969,1689089652518 already deleted, retry=false 2023-07-11 15:34:16,592 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,32969,1689089652518 expired; onlineServers=1 2023-07-11 15:34:16,592 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,32969,1689089652518 2023-07-11 15:34:16,593 DEBUG [RS:1;jenkins-hbase9:35817] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/oldWALs 2023-07-11 15:34:16,593 INFO [RS:1;jenkins-hbase9:35817] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase9.apache.org%2C35817%2C1689089652358:(num 1689089653248) 2023-07-11 15:34:16,593 DEBUG [RS:1;jenkins-hbase9:35817] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,593 INFO [RS:1;jenkins-hbase9:35817] regionserver.LeaseManager(133): Closed leases 2023-07-11 15:34:16,593 INFO [RS:1;jenkins-hbase9:35817] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase9:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 15:34:16,594 INFO [regionserver/jenkins-hbase9:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-11 15:34:16,594 INFO [RS:1;jenkins-hbase9:35817] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:35817 2023-07-11 15:34:16,694 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:16,694 INFO [RS:2;jenkins-hbase9:32969] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,32969,1689089652518; zookeeper connection closed. 2023-07-11 15:34:16,694 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:32969-0x10154f778bf0003, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:16,694 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@37a655cd] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@37a655cd 2023-07-11 15:34:16,695 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase9.apache.org,35817,1689089652358 2023-07-11 15:34:16,695 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 15:34:16,696 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase9.apache.org,35817,1689089652358] 2023-07-11 15:34:16,696 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase9.apache.org,35817,1689089652358; numProcessing=4 2023-07-11 15:34:16,697 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase9.apache.org,35817,1689089652358 already deleted, retry=false 2023-07-11 15:34:16,697 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase9.apache.org,35817,1689089652358 expired; onlineServers=0 2023-07-11 15:34:16,697 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase9.apache.org,37033,1689089652021' ***** 2023-07-11 15:34:16,697 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-11 15:34:16,698 DEBUG [M:0;jenkins-hbase9:37033] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c3a3843, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase9.apache.org/172.31.2.10:0 2023-07-11 15:34:16,698 INFO [M:0;jenkins-hbase9:37033] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 15:34:16,701 INFO [M:0;jenkins-hbase9:37033] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@419003a3{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 15:34:16,701 INFO [M:0;jenkins-hbase9:37033] server.AbstractConnector(383): Stopped ServerConnector@26366166{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:16,702 INFO 
[M:0;jenkins-hbase9:37033] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 15:34:16,702 INFO [M:0;jenkins-hbase9:37033] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ec943da{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 15:34:16,702 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-11 15:34:16,703 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 15:34:16,703 INFO [M:0;jenkins-hbase9:37033] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@24dba74e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/hadoop.log.dir/,STOPPED} 2023-07-11 15:34:16,703 INFO [M:0;jenkins-hbase9:37033] regionserver.HRegionServer(1144): stopping server jenkins-hbase9.apache.org,37033,1689089652021 2023-07-11 15:34:16,703 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 15:34:16,703 INFO [M:0;jenkins-hbase9:37033] regionserver.HRegionServer(1170): stopping server jenkins-hbase9.apache.org,37033,1689089652021; all regions closed. 2023-07-11 15:34:16,703 DEBUG [M:0;jenkins-hbase9:37033] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 15:34:16,703 INFO [M:0;jenkins-hbase9:37033] master.HMaster(1491): Stopping master jetty server 2023-07-11 15:34:16,704 INFO [M:0;jenkins-hbase9:37033] server.AbstractConnector(383): Stopped ServerConnector@68c6f068{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 15:34:16,704 DEBUG [M:0;jenkins-hbase9:37033] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-11 15:34:16,704 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-11 15:34:16,704 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089652926] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.small.0-1689089652926,5,FailOnTimeoutGroup] 2023-07-11 15:34:16,704 DEBUG [master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089652925] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase9:0:becomeActiveMaster-HFileCleaner.large.0-1689089652925,5,FailOnTimeoutGroup] 2023-07-11 15:34:16,704 DEBUG [M:0;jenkins-hbase9:37033] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-11 15:34:16,705 INFO [M:0;jenkins-hbase9:37033] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-11 15:34:16,705 INFO [M:0;jenkins-hbase9:37033] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-11 15:34:16,705 INFO [M:0;jenkins-hbase9:37033] hbase.ChoreService(369): Chore service for: master/jenkins-hbase9:0 had [] on shutdown 2023-07-11 15:34:16,705 DEBUG [M:0;jenkins-hbase9:37033] master.HMaster(1512): Stopping service threads 2023-07-11 15:34:16,705 INFO [M:0;jenkins-hbase9:37033] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-11 15:34:16,705 ERROR [M:0;jenkins-hbase9:37033] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-11 15:34:16,705 INFO [M:0;jenkins-hbase9:37033] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-11 15:34:16,705 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-11 15:34:16,705 DEBUG [M:0;jenkins-hbase9:37033] zookeeper.ZKUtil(398): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-11 15:34:16,705 WARN [M:0;jenkins-hbase9:37033] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-11 15:34:16,705 INFO [M:0;jenkins-hbase9:37033] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-11 15:34:16,706 INFO [M:0;jenkins-hbase9:37033] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-11 15:34:16,706 DEBUG [M:0;jenkins-hbase9:37033] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 15:34:16,706 INFO [M:0;jenkins-hbase9:37033] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:16,706 DEBUG [M:0;jenkins-hbase9:37033] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:16,706 DEBUG [M:0;jenkins-hbase9:37033] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 15:34:16,706 DEBUG [M:0;jenkins-hbase9:37033] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 15:34:16,706 INFO [M:0;jenkins-hbase9:37033] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.22 KB heapSize=90.66 KB 2023-07-11 15:34:16,717 INFO [M:0;jenkins-hbase9:37033] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.22 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f897c9ec99d54bcebcc8d11c7566c113 2023-07-11 15:34:16,722 DEBUG [M:0;jenkins-hbase9:37033] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f897c9ec99d54bcebcc8d11c7566c113 as hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f897c9ec99d54bcebcc8d11c7566c113 2023-07-11 15:34:16,726 INFO [M:0;jenkins-hbase9:37033] regionserver.HStore(1080): Added hdfs://localhost:44357/user/jenkins/test-data/c1520e66-05c2-92bd-4443-a0346034ff74/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f897c9ec99d54bcebcc8d11c7566c113, entries=22, sequenceid=175, filesize=11.1 K 2023-07-11 15:34:16,727 INFO [M:0;jenkins-hbase9:37033] regionserver.HRegion(2948): Finished flush of dataSize ~76.22 KB/78053, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false 2023-07-11 15:34:16,728 INFO [M:0;jenkins-hbase9:37033] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 15:34:16,729 DEBUG [M:0;jenkins-hbase9:37033] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 15:34:16,731 INFO [M:0;jenkins-hbase9:37033] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-11 15:34:16,731 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 15:34:16,732 INFO [M:0;jenkins-hbase9:37033] ipc.NettyRpcServer(158): Stopping server on /172.31.2.10:37033 2023-07-11 15:34:16,733 DEBUG [M:0;jenkins-hbase9:37033] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase9.apache.org,37033,1689089652021 already deleted, retry=false 2023-07-11 15:34:16,858 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:16,858 INFO [M:0;jenkins-hbase9:37033] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,37033,1689089652021; zookeeper connection closed. 2023-07-11 15:34:16,858 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): master:37033-0x10154f778bf0000, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:16,958 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 15:34:16,958 INFO [RS:1;jenkins-hbase9:35817] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,35817,1689089652358; zookeeper connection closed. 
2023-07-11 15:34:16,958 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:35817-0x10154f778bf0002, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-11 15:34:16,958 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@28bea971] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@28bea971
2023-07-11 15:34:17,058 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-11 15:34:17,058 INFO [RS:0;jenkins-hbase9:41645] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,41645,1689089652214; zookeeper connection closed.
2023-07-11 15:34:17,058 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:41645-0x10154f778bf0001, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-11 15:34:17,060 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@52e2230f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@52e2230f
2023-07-11 15:34:17,158 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-11 15:34:17,158 INFO [RS:3;jenkins-hbase9:39531] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase9.apache.org,39531,1689089654144; zookeeper connection closed.
2023-07-11 15:34:17,158 DEBUG [Listener at localhost/36775-EventThread] zookeeper.ZKWatcher(600): regionserver:39531-0x10154f778bf000b, quorum=127.0.0.1:64295, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-11 15:34:17,159 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@704c57ca] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@704c57ca
2023-07-11 15:34:17,159 INFO [Listener at localhost/36775] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-11 15:34:17,159 WARN [Listener at localhost/36775] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-11 15:34:17,162 INFO [Listener at localhost/36775] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-11 15:34:17,265 WARN [BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-11 15:34:17,265 WARN [BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1044115924-172.31.2.10-1689089651275 (Datanode Uuid 076ab4ec-f86c-4c6c-b01c-45a21cb89d72) service to localhost/127.0.0.1:44357
2023-07-11 15:34:17,265 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data5/current/BP-1044115924-172.31.2.10-1689089651275] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-11 15:34:17,266 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data6/current/BP-1044115924-172.31.2.10-1689089651275] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-11 15:34:17,267 WARN [Listener at localhost/36775] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-11 15:34:17,269 INFO [Listener at localhost/36775] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-11 15:34:17,371 WARN [BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-11 15:34:17,371 WARN [BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1044115924-172.31.2.10-1689089651275 (Datanode Uuid c056f7ec-ec77-4f1a-b321-8846c4309192) service to localhost/127.0.0.1:44357
2023-07-11 15:34:17,372 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data3/current/BP-1044115924-172.31.2.10-1689089651275] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-11 15:34:17,372 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data4/current/BP-1044115924-172.31.2.10-1689089651275] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-11 15:34:17,373 WARN [Listener at localhost/36775] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-11 15:34:17,376 INFO [Listener at localhost/36775] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-11 15:34:17,478 WARN [BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-11 15:34:17,478 WARN [BP-1044115924-172.31.2.10-1689089651275 heartbeating to localhost/127.0.0.1:44357] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1044115924-172.31.2.10-1689089651275 (Datanode Uuid 5af8bd8e-0d89-46b4-bd9e-33fcd8819c87) service to localhost/127.0.0.1:44357
2023-07-11 15:34:17,479 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data1/current/BP-1044115924-172.31.2.10-1689089651275] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-11 15:34:17,479 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3a202c2b-3a9a-ea61-639d-f38fc00b49ed/cluster_85cd8779-962b-56dd-e699-0226f42722ba/dfs/data/data2/current/BP-1044115924-172.31.2.10-1689089651275] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-11 15:34:17,488 INFO [Listener at localhost/36775] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-11 15:34:17,601 INFO [Listener at localhost/36775] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-11 15:34:17,626 INFO [Listener at localhost/36775] hbase.HBaseTestingUtility(1293): Minicluster is down
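
Note (editor, for context only): the shutdown sequence above — region servers exiting and closing their ZooKeeper connections, DataNodes ending their block pool service, the MiniZK cluster stopping, and finally "Minicluster is down" — is the teardown that HBaseTestingUtility performs for a mini-cluster test. The Java sketch below is purely illustrative of that lifecycle, assuming the HBase 2.x testing API (HBaseTestingUtility, StartMiniClusterOption); it is not the source of TestRSGroupsAdmin1, and the option values and test body are placeholders.

// Illustrative sketch only: how a mini-cluster test typically starts and stops the
// cluster whose shutdown is logged above. Assumes the HBase 2.x testing API; the
// numeric values below are placeholders, not taken from the real test.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterLifecycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();

    // Build the cluster layout: masters, region servers, DataNodes, ZooKeeper servers.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)        // placeholder counts for illustration
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();

    util.startMiniCluster(option);
    try {
      // ... test body would run against util.getConnection() / util.getAdmin() ...
    } finally {
      // Tears the cluster down; this is what produces log output like the lines above
      // (region server exits, DataNode block pool shutdown, MiniZK shutdown,
      // "Minicluster is down").
      util.shutdownMiniCluster();
    }
  }
}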