2023-07-18 10:14:25,028 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96 2023-07-18 10:14:25,048 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-18 10:14:25,065 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 10:14:25,065 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba, deleteOnExit=true 2023-07-18 10:14:25,066 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 10:14:25,066 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/test.cache.data in system properties and HBase conf 2023-07-18 10:14:25,067 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 10:14:25,067 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir in system properties and HBase conf 2023-07-18 10:14:25,068 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 10:14:25,068 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 10:14:25,068 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 10:14:25,199 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-18 10:14:25,607 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 10:14:25,611 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 10:14:25,612 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 10:14:25,612 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 10:14:25,612 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 10:14:25,613 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 10:14:25,613 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 10:14:25,614 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 10:14:25,614 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 10:14:25,614 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 10:14:25,615 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/nfs.dump.dir in system properties and HBase conf 2023-07-18 10:14:25,615 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir in system properties and HBase conf 2023-07-18 10:14:25,615 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 10:14:25,615 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 10:14:25,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 10:14:26,162 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 10:14:26,166 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 10:14:26,445 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-18 10:14:26,638 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-18 10:14:26,656 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:14:26,698 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:14:26,729 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/Jetty_localhost_43493_hdfs____q93aqh/webapp 2023-07-18 10:14:26,884 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43493 2023-07-18 10:14:26,896 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 10:14:26,897 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 10:14:27,425 WARN [Listener at localhost/38869] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:14:27,495 WARN [Listener at localhost/38869] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:14:27,513 WARN [Listener at localhost/38869] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:14:27,520 INFO [Listener at localhost/38869] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:14:27,526 INFO [Listener at localhost/38869] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/Jetty_localhost_37507_datanode____mcl0qn/webapp 2023-07-18 10:14:27,630 INFO [Listener at localhost/38869] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37507 2023-07-18 10:14:28,022 WARN [Listener at localhost/46239] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:14:28,033 WARN [Listener at localhost/46239] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:14:28,038 WARN [Listener at localhost/46239] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:14:28,040 INFO [Listener at localhost/46239] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:14:28,044 INFO [Listener at localhost/46239] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/Jetty_localhost_43363_datanode____.247l8o/webapp 2023-07-18 10:14:28,151 INFO [Listener at localhost/46239] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43363 2023-07-18 10:14:28,163 WARN [Listener at localhost/33813] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:14:28,179 WARN [Listener at localhost/33813] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:14:28,183 WARN [Listener at localhost/33813] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:14:28,185 INFO [Listener at localhost/33813] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:14:28,192 INFO [Listener at localhost/33813] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/Jetty_localhost_42719_datanode____.nbdeew/webapp 2023-07-18 10:14:28,317 INFO [Listener at localhost/33813] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42719 2023-07-18 10:14:28,328 WARN [Listener at localhost/45689] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:14:28,569 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x69e8782854522144: Processing first storage report for DS-0174ddba-b045-40fa-862f-a107e2de6134 from datanode e1f54303-a893-42d2-840d-6c8ceb04f86c 2023-07-18 10:14:28,570 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x69e8782854522144: from storage DS-0174ddba-b045-40fa-862f-a107e2de6134 node DatanodeRegistration(127.0.0.1:39177, datanodeUuid=e1f54303-a893-42d2-840d-6c8ceb04f86c, infoPort=37799, 
infoSecurePort=0, ipcPort=33813, storageInfo=lv=-57;cid=testClusterID;nsid=83957340;c=1689675266234), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 10:14:28,570 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xda590893411e3a85: Processing first storage report for DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f from datanode 747d601c-3feb-4b95-918b-50fbb899c0cd 2023-07-18 10:14:28,570 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xda590893411e3a85: from storage DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f node DatanodeRegistration(127.0.0.1:33197, datanodeUuid=747d601c-3feb-4b95-918b-50fbb899c0cd, infoPort=36845, infoSecurePort=0, ipcPort=46239, storageInfo=lv=-57;cid=testClusterID;nsid=83957340;c=1689675266234), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:28,570 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x83d4c03f965e7a02: Processing first storage report for DS-f19a9f53-99d6-4507-a0b5-5709798563f1 from datanode 30525c1c-db5d-4f26-a1c7-6faee95ac827 2023-07-18 10:14:28,571 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x83d4c03f965e7a02: from storage DS-f19a9f53-99d6-4507-a0b5-5709798563f1 node DatanodeRegistration(127.0.0.1:44091, datanodeUuid=30525c1c-db5d-4f26-a1c7-6faee95ac827, infoPort=34309, infoSecurePort=0, ipcPort=45689, storageInfo=lv=-57;cid=testClusterID;nsid=83957340;c=1689675266234), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:28,571 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x69e8782854522144: Processing first storage report for DS-5298cb29-8dc4-482d-9283-a8b778e61001 from datanode e1f54303-a893-42d2-840d-6c8ceb04f86c 2023-07-18 10:14:28,571 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x69e8782854522144: from storage DS-5298cb29-8dc4-482d-9283-a8b778e61001 node DatanodeRegistration(127.0.0.1:39177, datanodeUuid=e1f54303-a893-42d2-840d-6c8ceb04f86c, infoPort=37799, infoSecurePort=0, ipcPort=33813, storageInfo=lv=-57;cid=testClusterID;nsid=83957340;c=1689675266234), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:28,571 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xda590893411e3a85: Processing first storage report for DS-5321e27d-d741-40c8-82ed-388c54f5e41d from datanode 747d601c-3feb-4b95-918b-50fbb899c0cd 2023-07-18 10:14:28,571 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xda590893411e3a85: from storage DS-5321e27d-d741-40c8-82ed-388c54f5e41d node DatanodeRegistration(127.0.0.1:33197, datanodeUuid=747d601c-3feb-4b95-918b-50fbb899c0cd, infoPort=36845, infoSecurePort=0, ipcPort=46239, storageInfo=lv=-57;cid=testClusterID;nsid=83957340;c=1689675266234), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:28,571 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x83d4c03f965e7a02: Processing first storage report for DS-6fd65542-22b4-4a54-b052-75c3fb549143 from datanode 30525c1c-db5d-4f26-a1c7-6faee95ac827 2023-07-18 10:14:28,571 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x83d4c03f965e7a02: from storage 
DS-6fd65542-22b4-4a54-b052-75c3fb549143 node DatanodeRegistration(127.0.0.1:44091, datanodeUuid=30525c1c-db5d-4f26-a1c7-6faee95ac827, infoPort=34309, infoSecurePort=0, ipcPort=45689, storageInfo=lv=-57;cid=testClusterID;nsid=83957340;c=1689675266234), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:28,745 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96 2023-07-18 10:14:28,891 INFO [Listener at localhost/45689] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/zookeeper_0, clientPort=53154, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 10:14:28,912 INFO [Listener at localhost/45689] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53154 2023-07-18 10:14:28,925 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:28,928 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:29,586 INFO [Listener at localhost/45689] util.FSUtils(471): Created version file at hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796 with version=8 2023-07-18 10:14:29,587 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/hbase-staging 2023-07-18 10:14:29,596 DEBUG [Listener at localhost/45689] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 10:14:29,596 DEBUG [Listener at localhost/45689] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 10:14:29,596 DEBUG [Listener at localhost/45689] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 10:14:29,596 DEBUG [Listener at localhost/45689] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-18 10:14:29,965 INFO [Listener at localhost/45689] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-18 10:14:30,710 INFO [Listener at localhost/45689] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:14:30,759 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:30,759 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:30,760 INFO [Listener at localhost/45689] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:14:30,760 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:30,760 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:14:30,924 INFO [Listener at localhost/45689] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:14:31,000 DEBUG [Listener at localhost/45689] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-18 10:14:31,104 INFO [Listener at localhost/45689] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42907 2023-07-18 10:14:31,118 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:31,120 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:31,147 INFO [Listener at localhost/45689] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42907 connecting to ZooKeeper ensemble=127.0.0.1:53154 2023-07-18 10:14:31,192 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:429070x0, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:14:31,195 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42907-0x10177ed05f80000 connected 2023-07-18 10:14:31,223 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:14:31,223 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:31,228 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:14:31,236 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42907 2023-07-18 10:14:31,237 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42907 2023-07-18 10:14:31,238 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42907 2023-07-18 10:14:31,238 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42907 2023-07-18 10:14:31,239 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42907 2023-07-18 10:14:31,273 INFO [Listener at localhost/45689] log.Log(170): Logging initialized @6985ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-18 10:14:31,398 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:14:31,399 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:14:31,400 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:14:31,401 INFO [Listener at localhost/45689] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 10:14:31,401 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:14:31,402 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:14:31,405 INFO [Listener at localhost/45689] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 10:14:31,461 INFO [Listener at localhost/45689] http.HttpServer(1146): Jetty bound to port 39059 2023-07-18 10:14:31,462 INFO [Listener at localhost/45689] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:14:31,490 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:31,493 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7936602a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:14:31,494 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:31,494 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@390c5cdd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:14:31,692 INFO [Listener at localhost/45689] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:14:31,704 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:14:31,704 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:14:31,706 INFO [Listener at localhost/45689] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:14:31,713 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:31,743 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@436587aa{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/jetty-0_0_0_0-39059-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7657339892883737628/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 10:14:31,758 INFO [Listener at localhost/45689] server.AbstractConnector(333): Started ServerConnector@1f13a933{HTTP/1.1, (http/1.1)}{0.0.0.0:39059} 2023-07-18 10:14:31,759 INFO [Listener at localhost/45689] server.Server(415): Started @7470ms 2023-07-18 10:14:31,763 INFO [Listener at localhost/45689] master.HMaster(444): hbase.rootdir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796, hbase.cluster.distributed=false 2023-07-18 10:14:31,846 INFO [Listener at localhost/45689] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:14:31,846 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:31,847 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:31,847 INFO 
[Listener at localhost/45689] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:14:31,847 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:31,847 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:14:31,853 INFO [Listener at localhost/45689] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:14:31,857 INFO [Listener at localhost/45689] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42163 2023-07-18 10:14:31,860 INFO [Listener at localhost/45689] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:14:31,868 DEBUG [Listener at localhost/45689] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:14:31,869 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:31,871 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:31,873 INFO [Listener at localhost/45689] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42163 connecting to ZooKeeper ensemble=127.0.0.1:53154 2023-07-18 10:14:31,877 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:421630x0, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:14:31,878 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42163-0x10177ed05f80001 connected 2023-07-18 10:14:31,878 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:14:31,880 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:31,881 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:14:31,881 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42163 2023-07-18 10:14:31,882 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42163 2023-07-18 10:14:31,882 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42163 2023-07-18 10:14:31,882 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42163 2023-07-18 10:14:31,883 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42163 2023-07-18 10:14:31,885 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:14:31,885 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:14:31,885 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:14:31,886 INFO [Listener at localhost/45689] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:14:31,887 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:14:31,887 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:14:31,887 INFO [Listener at localhost/45689] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 10:14:31,889 INFO [Listener at localhost/45689] http.HttpServer(1146): Jetty bound to port 39395 2023-07-18 10:14:31,890 INFO [Listener at localhost/45689] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:14:31,895 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:31,895 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f5b6c1a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:14:31,896 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:31,897 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@59ec68ba{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:14:32,023 INFO [Listener at localhost/45689] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:14:32,025 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:14:32,026 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:14:32,026 INFO [Listener at localhost/45689] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 10:14:32,028 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:32,032 INFO 
[Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4741679e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/jetty-0_0_0_0-39395-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2516117430477913908/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:14:32,034 INFO [Listener at localhost/45689] server.AbstractConnector(333): Started ServerConnector@1eb5cca9{HTTP/1.1, (http/1.1)}{0.0.0.0:39395} 2023-07-18 10:14:32,034 INFO [Listener at localhost/45689] server.Server(415): Started @7745ms 2023-07-18 10:14:32,049 INFO [Listener at localhost/45689] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:14:32,049 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:32,049 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:32,050 INFO [Listener at localhost/45689] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:14:32,050 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:32,050 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:14:32,051 INFO [Listener at localhost/45689] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:14:32,053 INFO [Listener at localhost/45689] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40033 2023-07-18 10:14:32,054 INFO [Listener at localhost/45689] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:14:32,058 DEBUG [Listener at localhost/45689] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:14:32,059 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:32,061 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:32,062 INFO [Listener at localhost/45689] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40033 connecting to ZooKeeper ensemble=127.0.0.1:53154 2023-07-18 10:14:32,067 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:400330x0, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
10:14:32,069 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:400330x0, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:14:32,070 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:400330x0, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:32,071 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:400330x0, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:14:32,075 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40033-0x10177ed05f80002 connected 2023-07-18 10:14:32,081 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40033 2023-07-18 10:14:32,082 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40033 2023-07-18 10:14:32,090 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40033 2023-07-18 10:14:32,098 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40033 2023-07-18 10:14:32,099 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40033 2023-07-18 10:14:32,102 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:14:32,103 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:14:32,103 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:14:32,104 INFO [Listener at localhost/45689] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:14:32,104 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:14:32,104 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:14:32,105 INFO [Listener at localhost/45689] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 10:14:32,106 INFO [Listener at localhost/45689] http.HttpServer(1146): Jetty bound to port 41571 2023-07-18 10:14:32,106 INFO [Listener at localhost/45689] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:14:32,145 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:32,146 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7668a9a6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:14:32,146 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:32,146 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a84da01{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:14:32,332 INFO [Listener at localhost/45689] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:14:32,333 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:14:32,333 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:14:32,333 INFO [Listener at localhost/45689] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:14:32,334 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:32,335 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@296c8231{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/jetty-0_0_0_0-41571-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7390032209574986/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:14:32,336 INFO [Listener at localhost/45689] server.AbstractConnector(333): Started ServerConnector@5b9ad7c5{HTTP/1.1, (http/1.1)}{0.0.0.0:41571} 2023-07-18 10:14:32,336 INFO [Listener at localhost/45689] server.Server(415): Started @8048ms 2023-07-18 10:14:32,349 INFO [Listener at localhost/45689] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:14:32,349 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:32,349 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:32,349 INFO [Listener at localhost/45689] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:14:32,349 INFO 
[Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:32,350 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:14:32,350 INFO [Listener at localhost/45689] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:14:32,351 INFO [Listener at localhost/45689] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40931 2023-07-18 10:14:32,352 INFO [Listener at localhost/45689] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:14:32,353 DEBUG [Listener at localhost/45689] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:14:32,354 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:32,355 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:32,356 INFO [Listener at localhost/45689] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40931 connecting to ZooKeeper ensemble=127.0.0.1:53154 2023-07-18 10:14:32,360 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:409310x0, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:14:32,362 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40931-0x10177ed05f80003 connected 2023-07-18 10:14:32,362 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:14:32,363 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:32,364 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:14:32,365 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40931 2023-07-18 10:14:32,365 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40931 2023-07-18 10:14:32,367 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40931 2023-07-18 10:14:32,370 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40931 2023-07-18 10:14:32,371 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40931 2023-07-18 10:14:32,373 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:14:32,374 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:14:32,374 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:14:32,375 INFO [Listener at localhost/45689] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:14:32,375 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:14:32,375 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:14:32,375 INFO [Listener at localhost/45689] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 10:14:32,376 INFO [Listener at localhost/45689] http.HttpServer(1146): Jetty bound to port 35655 2023-07-18 10:14:32,376 INFO [Listener at localhost/45689] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:14:32,377 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:32,378 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3c064514{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:14:32,378 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:32,379 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4025456f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:14:32,527 INFO [Listener at localhost/45689] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:14:32,528 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:14:32,528 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:14:32,529 INFO [Listener at localhost/45689] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:14:32,530 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:32,531 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6bae5329{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/jetty-0_0_0_0-35655-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1868045098869101268/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:14:32,532 INFO [Listener at localhost/45689] server.AbstractConnector(333): Started ServerConnector@341f201f{HTTP/1.1, (http/1.1)}{0.0.0.0:35655} 2023-07-18 10:14:32,532 INFO [Listener at localhost/45689] server.Server(415): Started @8244ms 2023-07-18 10:14:32,538 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:14:32,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@6e640ae4{HTTP/1.1, (http/1.1)}{0.0.0.0:41927} 2023-07-18 10:14:32,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8253ms 2023-07-18 10:14:32,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:32,553 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 10:14:32,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:32,575 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:14:32,575 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:14:32,575 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:14:32,575 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:14:32,576 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:32,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 10:14:32,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42907,1689675269765 from backup master directory 2023-07-18 10:14:32,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 10:14:32,583 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:32,583 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 10:14:32,584 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:14:32,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:32,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-18 10:14:32,591 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-18 10:14:32,707 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/hbase.id with ID: b6e04fb6-9321-429c-8da0-022bc4479b58 2023-07-18 10:14:32,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:32,779 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:32,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2ffb11a2 to 127.0.0.1:53154 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:14:32,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6e24ef51, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:14:32,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:32,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 10:14:32,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-18 10:14:32,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-18 10:14:32,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 10:14:32,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 10:14:32,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:14:33,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store-tmp 2023-07-18 10:14:33,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:33,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 10:14:33,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:14:33,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:14:33,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 10:14:33,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:14:33,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
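
The FanOutOneBlockAsyncDFSOutputHelper and FanOutOneBlockAsyncDFSOutputSaslHelper DEBUG entries above (including the IllegalArgumentException and NoSuchMethodException traces) are expected on this Hadoop version: the async WAL code probes the installed HDFS client by reflection and falls back to an older code path when a flag or method is missing, after which AsyncFSWALProvider is instantiated normally. A minimal, self-contained sketch of that probing pattern follows; the class and method names here are illustrative, not HBase's actual implementation.

    import java.lang.reflect.Method;

    public final class CapabilityProbe {
      // True if the enum type declares the named constant, e.g. probing
      // CreateFlag for SHOULD_REPLICATE the way the helper above does.
      static <E extends Enum<E>> boolean hasEnumConstant(Class<E> enumType, String name) {
        try {
          Enum.valueOf(enumType, name);
          return true;
        } catch (IllegalArgumentException e) {
          return false; // older release: constant absent, use the fallback path
        }
      }

      // The method if the client provides it, or null when it predates it.
      static Method findMethod(Class<?> owner, String name, Class<?>... params) {
        try {
          return owner.getDeclaredMethod(name, params);
        } catch (NoSuchMethodException e) {
          return null;
        }
      }

      public static void main(String[] args) {
        // Exercise the helpers against JDK types so the sketch runs standalone.
        System.out.println(hasEnumConstant(java.time.DayOfWeek.class, "MONDAY")); // true
        System.out.println(findMethod(String.class, "noSuchMethod") != null);     // false
      }
    }
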
2023-07-18 10:14:33,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:14:33,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/WALs/jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:33,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42907%2C1689675269765, suffix=, logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/WALs/jenkins-hbase4.apache.org,42907,1689675269765, archiveDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/oldWALs, maxLogs=10 2023-07-18 10:14:33,156 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK] 2023-07-18 10:14:33,156 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK] 2023-07-18 10:14:33,156 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK] 2023-07-18 10:14:33,166 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-18 10:14:33,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/WALs/jenkins-hbase4.apache.org,42907,1689675269765/jenkins-hbase4.apache.org%2C42907%2C1689675269765.1689675273099 2023-07-18 10:14:33,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK], DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK], DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK]] 2023-07-18 10:14:33,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:33,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:33,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:14:33,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:14:33,330 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:14:33,337 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 10:14:33,366 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 10:14:33,378 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-18 10:14:33,384 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:14:33,385 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:14:33,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:14:33,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:33,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10347670560, jitterRate=-0.036298081278800964}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:33,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:14:33,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 10:14:33,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 10:14:33,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 10:14:33,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 10:14:33,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-18 10:14:33,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 40 msec 2023-07-18 10:14:33,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 10:14:33,508 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 10:14:33,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
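
The AbstractFSWAL "WAL configuration" entry above (blocksize=256 MB, rollsize=128 MB, maxLogs=10) and the choice of AsyncFSWALProvider are driven by configuration. A hedged sketch of setting the equivalent knobs programmatically follows; the property names are the ones I believe these values come from, so treat them as assumptions rather than confirmed keys.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed property names behind the values reported in the log above.
        conf.set("hbase.wal.provider", "asyncfs");                             // AsyncFSWALProvider
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // blocksize=256 MB
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // rollsize = blocksize * 0.5 = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 10);                         // maxLogs=10
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }
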
2023-07-18 10:14:33,524 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 10:14:33,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 10:14:33,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 10:14:33,543 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:33,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 10:14:33,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 10:14:33,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 10:14:33,570 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:33,570 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:33,570 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:33,570 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:33,570 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:33,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42907,1689675269765, sessionid=0x10177ed05f80000, setting cluster-up flag (Was=false) 2023-07-18 10:14:33,590 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:33,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 10:14:33,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:33,604 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:33,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 10:14:33,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:33,617 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.hbase-snapshot/.tmp 2023-07-18 10:14:33,638 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(951): ClusterId : b6e04fb6-9321-429c-8da0-022bc4479b58 2023-07-18 10:14:33,652 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(951): ClusterId : b6e04fb6-9321-429c-8da0-022bc4479b58 2023-07-18 10:14:33,662 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(951): ClusterId : b6e04fb6-9321-429c-8da0-022bc4479b58 2023-07-18 10:14:33,672 DEBUG [RS:1;jenkins-hbase4:40033] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:14:33,672 DEBUG [RS:2;jenkins-hbase4:40931] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:14:33,672 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:14:33,685 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:14:33,685 DEBUG [RS:2;jenkins-hbase4:40931] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:14:33,685 DEBUG [RS:1;jenkins-hbase4:40033] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:14:33,685 DEBUG [RS:2;jenkins-hbase4:40931] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:14:33,685 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:14:33,685 DEBUG [RS:1;jenkins-hbase4:40033] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:14:33,689 DEBUG [RS:2;jenkins-hbase4:40931] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:14:33,689 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:14:33,689 DEBUG [RS:1;jenkins-hbase4:40033] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:14:33,692 DEBUG [RS:1;jenkins-hbase4:40033] zookeeper.ReadOnlyZKClient(139): Connect 0x4201edaf to 127.0.0.1:53154 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-18 10:14:33,692 DEBUG [RS:2;jenkins-hbase4:40931] zookeeper.ReadOnlyZKClient(139): Connect 0x6dc3e660 to 127.0.0.1:53154 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:14:33,692 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ReadOnlyZKClient(139): Connect 0x3afe5354 to 127.0.0.1:53154 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:14:33,753 DEBUG [RS:0;jenkins-hbase4:42163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26526714, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:14:33,754 DEBUG [RS:2;jenkins-hbase4:40931] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28c175bc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:14:33,754 DEBUG [RS:0;jenkins-hbase4:42163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62d905fe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:14:33,754 DEBUG [RS:2;jenkins-hbase4:40931] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13293eb9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:14:33,755 DEBUG [RS:1;jenkins-hbase4:40033] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3fc6997b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:14:33,755 DEBUG [RS:1;jenkins-hbase4:40033] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@452655dc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:14:33,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 10:14:33,780 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42163 2023-07-18 10:14:33,782 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40033 2023-07-18 10:14:33,783 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:40931 2023-07-18 10:14:33,786 INFO [RS:1;jenkins-hbase4:40033] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:14:33,786 INFO [RS:2;jenkins-hbase4:40931] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:14:33,786 INFO [RS:0;jenkins-hbase4:42163] 
regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:14:33,787 INFO [RS:0;jenkins-hbase4:42163] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:14:33,787 INFO [RS:2;jenkins-hbase4:40931] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:14:33,787 INFO [RS:1;jenkins-hbase4:40033] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:14:33,787 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:14:33,787 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:14:33,788 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:14:33,791 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:42163, startcode=1689675271845 2023-07-18 10:14:33,791 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:40033, startcode=1689675272048 2023-07-18 10:14:33,791 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:40931, startcode=1689675272348 2023-07-18 10:14:33,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 10:14:33,794 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:14:33,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 10:14:33,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
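
The coprocessor entries above show the test master loading RSGroupAdminEndpoint (plus the test-only TestRSGroupsBase$CPMasterObserver) and registering the RSGroupAdminService. On a regular 2.4 cluster the rsgroup feature is switched on through master configuration; the sketch below assumes the usual two properties are the right ones.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableRsGroups {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Load the rsgroup admin endpoint on the master and use the group-aware balancer.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        System.out.println(conf.get("hbase.coprocessor.master.classes"));
      }
    }
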
2023-07-18 10:14:33,815 DEBUG [RS:1;jenkins-hbase4:40033] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:14:33,815 DEBUG [RS:0;jenkins-hbase4:42163] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:14:33,815 DEBUG [RS:2;jenkins-hbase4:40931] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:14:33,882 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54715, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:14:33,883 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46071, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:14:33,882 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60471, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:14:33,893 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:33,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 10:14:33,905 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:33,906 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:33,944 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 10:14:33,945 WARN [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 10:14:33,945 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 10:14:33,945 WARN [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 10:14:33,945 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 10:14:33,945 WARN [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 10:14:33,963 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 10:14:33,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 10:14:33,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 10:14:33,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
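
The two StochasticLoadBalancer "Loaded config" entries above echo the balancer's effective settings (maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false). A sketch of tuning them through configuration follows; the property names below are assumptions based on how these settings are commonly exposed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Mirror the values printed in the "Loaded config" lines above (key names assumed).
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30000L);
        conf.setBoolean("hbase.master.loadbalance.bytable", false);            // isByTable=false
        System.out.println(conf.getInt("hbase.master.balancer.stochastic.maxSteps", -1));
      }
    }
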
2023-07-18 10:14:33,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:14:33,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:14:33,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:14:33,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:14:33,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 10:14:33,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:33,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:14:33,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,004 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689675304004 2023-07-18 10:14:34,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 10:14:34,009 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 10:14:34,010 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 10:14:34,012 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 10:14:34,014 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:34,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 10:14:34,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 10:14:34,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 10:14:34,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 10:14:34,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 10:14:34,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 10:14:34,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 10:14:34,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 10:14:34,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 10:14:34,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675274039,5,FailOnTimeoutGroup] 2023-07-18 10:14:34,046 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:42163, startcode=1689675271845 2023-07-18 10:14:34,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675274039,5,FailOnTimeoutGroup] 2023-07-18 10:14:34,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
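
The HMaster entry just above names the property that controls reopening regions with a very high storeFileRefCount; the feature is off because the threshold is not set to a positive value. A short sketch of enabling it (the threshold of 256 is purely illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RefCountRecovery {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Property named directly in the log line above; any value > 0 enables the chore.
        conf.setInt("hbase.regions.recovery.store.file.ref.count", 256); // illustrative threshold
        System.out.println(conf.getInt("hbase.regions.recovery.store.file.ref.count", 0));
      }
    }
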
2023-07-18 10:14:34,047 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:40033, startcode=1689675272048 2023-07-18 10:14:34,046 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:40931, startcode=1689675272348 2023-07-18 10:14:34,052 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:34,053 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:34,055 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 10:14:34,055 WARN [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-18 10:14:34,056 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:34,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
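
The ServerNotRunningYetException traces and the "reportForDuty failed; sleeping ... and then retrying" warnings above are part of normal startup ordering: each region server keeps retrying its registration with a growing sleep (100 ms, then 200 ms) until the master's RPC services are up. The generic shape of that loop is sketched below as an illustration only; it is not the HRegionServer code.

    import java.util.concurrent.Callable;

    public class RetryWithBackoff {
      // Retry the task, doubling the sleep after each failure up to maxSleepMs.
      static <T> T retry(Callable<T> task, long initialSleepMs, long maxSleepMs) throws Exception {
        long sleep = initialSleepMs;
        while (true) {
          try {
            return task.call();
          } catch (Exception e) {
            System.out.println("attempt failed (" + e.getMessage() + "); sleeping " + sleep + " ms and then retrying");
            Thread.sleep(sleep);
            sleep = Math.min(sleep * 2, maxSleepMs);
          }
        }
      }

      public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        String result = retry(() -> {
          // Simulate the master refusing registrations for the first second.
          if (System.currentTimeMillis() - start < 1000) {
            throw new IllegalStateException("Server is not running yet");
          }
          return "registered";
        }, 100, 2000);
        System.out.println(result);
      }
    }
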
2023-07-18 10:14:34,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,060 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 10:14:34,060 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 10:14:34,060 WARN [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-18 10:14:34,060 WARN [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-18 10:14:34,102 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:34,103 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:34,104 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796 2023-07-18 10:14:34,128 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:34,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 10:14:34,134 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info 2023-07-18 10:14:34,134 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 10:14:34,135 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:34,135 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 10:14:34,138 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:14:34,139 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 10:14:34,140 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:34,140 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 10:14:34,142 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table 2023-07-18 10:14:34,143 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 10:14:34,144 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:34,145 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740 2023-07-18 10:14:34,147 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740 2023-07-18 10:14:34,151 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 10:14:34,154 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 10:14:34,159 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:34,160 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11325911360, jitterRate=0.054807692766189575}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 10:14:34,160 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 10:14:34,160 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 10:14:34,160 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 10:14:34,160 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 10:14:34,160 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 10:14:34,160 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 10:14:34,161 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 10:14:34,161 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 10:14:34,168 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 10:14:34,168 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 10:14:34,178 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 10:14:34,191 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 10:14:34,194 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, 
region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 10:14:34,257 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:42163, startcode=1689675271845 2023-07-18 10:14:34,261 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:40033, startcode=1689675272048 2023-07-18 10:14:34,261 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:40931, startcode=1689675272348 2023-07-18 10:14:34,263 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:34,265 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:14:34,265 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 10:14:34,269 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,269 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:14:34,270 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 10:14:34,270 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796 2023-07-18 10:14:34,270 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:34,270 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38869 2023-07-18 10:14:34,270 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 10:14:34,271 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796 2023-07-18 10:14:34,270 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39059 2023-07-18 10:14:34,271 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 10:14:34,271 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38869 2023-07-18 10:14:34,271 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39059 2023-07-18 10:14:34,272 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796 2023-07-18 10:14:34,272 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38869 2023-07-18 10:14:34,272 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39059 2023-07-18 10:14:34,281 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:34,281 DEBUG [RS:1;jenkins-hbase4:40033] zookeeper.ZKUtil(162): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:34,281 WARN [RS:1;jenkins-hbase4:40033] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:14:34,281 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ZKUtil(162): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:34,282 INFO [RS:1;jenkins-hbase4:40033] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:14:34,282 DEBUG [RS:2;jenkins-hbase4:40931] zookeeper.ZKUtil(162): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,282 WARN [RS:0;jenkins-hbase4:42163] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:14:34,283 WARN [RS:2;jenkins-hbase4:40931] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 10:14:34,283 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:34,283 INFO [RS:2;jenkins-hbase4:40931] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:14:34,283 INFO [RS:0;jenkins-hbase4:42163] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:14:34,283 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,284 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:34,284 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42163,1689675271845] 2023-07-18 10:14:34,284 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40931,1689675272348] 2023-07-18 10:14:34,284 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40033,1689675272048] 2023-07-18 10:14:34,298 DEBUG [RS:1;jenkins-hbase4:40033] zookeeper.ZKUtil(162): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:34,298 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ZKUtil(162): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:34,298 DEBUG [RS:2;jenkins-hbase4:40931] zookeeper.ZKUtil(162): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:34,299 DEBUG [RS:1;jenkins-hbase4:40033] zookeeper.ZKUtil(162): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,299 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ZKUtil(162): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,299 DEBUG [RS:2;jenkins-hbase4:40931] zookeeper.ZKUtil(162): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,299 DEBUG [RS:1;jenkins-hbase4:40033] zookeeper.ZKUtil(162): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:34,299 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ZKUtil(162): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:34,300 
DEBUG [RS:2;jenkins-hbase4:40931] zookeeper.ZKUtil(162): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:34,311 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:14:34,311 DEBUG [RS:1;jenkins-hbase4:40033] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:14:34,311 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:14:34,322 INFO [RS:2;jenkins-hbase4:40931] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:14:34,322 INFO [RS:1;jenkins-hbase4:40033] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:14:34,323 INFO [RS:0;jenkins-hbase4:42163] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:14:34,346 DEBUG [jenkins-hbase4:42907] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 10:14:34,348 INFO [RS:0;jenkins-hbase4:42163] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:14:34,348 INFO [RS:2;jenkins-hbase4:40931] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:14:34,348 INFO [RS:1;jenkins-hbase4:40033] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:14:34,358 INFO [RS:2;jenkins-hbase4:40931] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:14:34,358 INFO [RS:2;jenkins-hbase4:40931] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,360 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:14:34,359 INFO [RS:1;jenkins-hbase4:40033] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:14:34,359 INFO [RS:0;jenkins-hbase4:42163] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:14:34,360 INFO [RS:1;jenkins-hbase4:40033] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,361 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 10:14:34,361 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:14:34,362 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:14:34,369 DEBUG [jenkins-hbase4:42907] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:34,372 DEBUG [jenkins-hbase4:42907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:34,372 DEBUG [jenkins-hbase4:42907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:34,372 DEBUG [jenkins-hbase4:42907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:34,372 DEBUG [jenkins-hbase4:42907] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:34,372 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,372 INFO [RS:1;jenkins-hbase4:40033] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,372 INFO [RS:2;jenkins-hbase4:40931] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,372 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,372 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,373 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,373 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,373 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,373 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,374 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,374 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,374 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,374 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,374 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service 
name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,374 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,374 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:14:34,373 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,374 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:14:34,375 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:2;jenkins-hbase4:40931] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,375 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:14:34,376 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,376 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,376 DEBUG 
[RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,376 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40931,1689675272348, state=OPENING 2023-07-18 10:14:34,376 DEBUG [RS:1;jenkins-hbase4:40033] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:34,380 INFO [RS:2;jenkins-hbase4:40931] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,380 INFO [RS:2;jenkins-hbase4:40931] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,381 INFO [RS:2;jenkins-hbase4:40931] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,382 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,383 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,383 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,385 INFO [RS:1;jenkins-hbase4:40033] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,385 INFO [RS:1;jenkins-hbase4:40033] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,385 INFO [RS:1;jenkins-hbase4:40033] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,386 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 10:14:34,388 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:34,388 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 10:14:34,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:34,401 INFO [RS:1;jenkins-hbase4:40033] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:14:34,401 INFO [RS:0;jenkins-hbase4:42163] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:14:34,401 INFO [RS:2;jenkins-hbase4:40931] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:14:34,406 INFO [RS:2;jenkins-hbase4:40931] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40931,1689675272348-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 10:14:34,406 INFO [RS:1;jenkins-hbase4:40033] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40033,1689675272048-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,406 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42163,1689675271845-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,428 INFO [RS:1;jenkins-hbase4:40033] regionserver.Replication(203): jenkins-hbase4.apache.org,40033,1689675272048 started 2023-07-18 10:14:34,428 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40033,1689675272048, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40033, sessionid=0x10177ed05f80002 2023-07-18 10:14:34,428 DEBUG [RS:1;jenkins-hbase4:40033] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:14:34,428 DEBUG [RS:1;jenkins-hbase4:40033] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:34,428 DEBUG [RS:1;jenkins-hbase4:40033] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40033,1689675272048' 2023-07-18 10:14:34,429 DEBUG [RS:1;jenkins-hbase4:40033] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:14:34,430 DEBUG [RS:1;jenkins-hbase4:40033] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:14:34,430 DEBUG [RS:1;jenkins-hbase4:40033] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:14:34,430 DEBUG [RS:1;jenkins-hbase4:40033] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:14:34,430 DEBUG [RS:1;jenkins-hbase4:40033] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:34,431 DEBUG [RS:1;jenkins-hbase4:40033] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40033,1689675272048' 2023-07-18 10:14:34,431 DEBUG [RS:1;jenkins-hbase4:40033] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:14:34,431 DEBUG [RS:1;jenkins-hbase4:40033] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:14:34,431 INFO [RS:2;jenkins-hbase4:40931] regionserver.Replication(203): jenkins-hbase4.apache.org,40931,1689675272348 started 2023-07-18 10:14:34,432 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40931,1689675272348, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40931, sessionid=0x10177ed05f80003 2023-07-18 10:14:34,432 INFO [RS:0;jenkins-hbase4:42163] regionserver.Replication(203): jenkins-hbase4.apache.org,42163,1689675271845 started 2023-07-18 10:14:34,432 DEBUG [RS:2;jenkins-hbase4:40931] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:14:34,432 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42163,1689675271845, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42163, sessionid=0x10177ed05f80001 2023-07-18 10:14:34,432 
DEBUG [RS:2;jenkins-hbase4:40931] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,433 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:14:34,433 DEBUG [RS:2;jenkins-hbase4:40931] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40931,1689675272348' 2023-07-18 10:14:34,433 DEBUG [RS:2;jenkins-hbase4:40931] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:14:34,433 DEBUG [RS:1;jenkins-hbase4:40033] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:14:34,433 DEBUG [RS:0;jenkins-hbase4:42163] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:34,433 INFO [RS:1;jenkins-hbase4:40033] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 10:14:34,434 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42163,1689675271845' 2023-07-18 10:14:34,434 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:14:34,434 INFO [RS:1;jenkins-hbase4:40033] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 10:14:34,434 DEBUG [RS:2;jenkins-hbase4:40931] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:14:34,434 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:14:34,434 DEBUG [RS:2;jenkins-hbase4:40931] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:14:34,435 DEBUG [RS:2;jenkins-hbase4:40931] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:14:34,435 DEBUG [RS:2;jenkins-hbase4:40931] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,437 DEBUG [RS:2;jenkins-hbase4:40931] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40931,1689675272348' 2023-07-18 10:14:34,437 DEBUG [RS:2;jenkins-hbase4:40931] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:14:34,438 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:14:34,438 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:14:34,438 DEBUG [RS:0;jenkins-hbase4:42163] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:34,438 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42163,1689675271845' 2023-07-18 10:14:34,438 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:14:34,438 
DEBUG [RS:2;jenkins-hbase4:40931] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:14:34,438 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:14:34,438 DEBUG [RS:2;jenkins-hbase4:40931] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:14:34,438 INFO [RS:2;jenkins-hbase4:40931] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 10:14:34,438 INFO [RS:2;jenkins-hbase4:40931] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 10:14:34,439 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:14:34,439 INFO [RS:0;jenkins-hbase4:42163] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 10:14:34,439 INFO [RS:0;jenkins-hbase4:42163] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 10:14:34,546 INFO [RS:2;jenkins-hbase4:40931] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40931%2C1689675272348, suffix=, logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,40931,1689675272348, archiveDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs, maxLogs=32 2023-07-18 10:14:34,547 INFO [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42163%2C1689675271845, suffix=, logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,42163,1689675271845, archiveDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs, maxLogs=32 2023-07-18 10:14:34,548 INFO [RS:1;jenkins-hbase4:40033] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40033%2C1689675272048, suffix=, logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,40033,1689675272048, archiveDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs, maxLogs=32 2023-07-18 10:14:34,568 WARN [ReadOnlyZKClient-127.0.0.1:53154@0x2ffb11a2] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 10:14:34,592 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,594 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:14:34,598 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK] 2023-07-18 10:14:34,598 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK] 2023-07-18 10:14:34,598 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK] 2023-07-18 10:14:34,605 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK] 2023-07-18 10:14:34,605 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK] 2023-07-18 10:14:34,606 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK] 2023-07-18 10:14:34,619 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42907,1689675269765] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:14:34,623 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK] 2023-07-18 10:14:34,623 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK] 2023-07-18 10:14:34,623 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK] 2023-07-18 10:14:34,626 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50344, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:14:34,640 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50346, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:14:34,640 INFO [RS:1;jenkins-hbase4:40033] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,40033,1689675272048/jenkins-hbase4.apache.org%2C40033%2C1689675272048.1689675274556 2023-07-18 10:14:34,641 DEBUG [RS:1;jenkins-hbase4:40033] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK], DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK], DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK]] 2023-07-18 10:14:34,641 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40931] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:50346 deadline: 1689675334641, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:34,641 INFO [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,42163,1689675271845/jenkins-hbase4.apache.org%2C42163%2C1689675271845.1689675274556 2023-07-18 10:14:34,641 DEBUG [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK], DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK], DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK]] 2023-07-18 10:14:34,645 INFO [RS:2;jenkins-hbase4:40931] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,40931,1689675272348/jenkins-hbase4.apache.org%2C40931%2C1689675272348.1689675274556 2023-07-18 10:14:34,646 DEBUG [RS:2;jenkins-hbase4:40931] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK], DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK], DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK]] 2023-07-18 10:14:34,656 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 10:14:34,657 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:14:34,664 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40931%2C1689675272348.meta, suffix=.meta, logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,40931,1689675272348, archiveDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs, maxLogs=32 2023-07-18 10:14:34,686 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK] 2023-07-18 10:14:34,687 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK] 2023-07-18 10:14:34,688 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK] 2023-07-18 10:14:34,692 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,40931,1689675272348/jenkins-hbase4.apache.org%2C40931%2C1689675272348.meta.1689675274665.meta 2023-07-18 10:14:34,693 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK], DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK], DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK]] 2023-07-18 10:14:34,693 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:34,695 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 10:14:34,697 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 10:14:34,699 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-18 10:14:34,704 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 10:14:34,704 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:34,704 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 10:14:34,705 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 10:14:34,707 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 10:14:34,709 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info 2023-07-18 10:14:34,709 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info 2023-07-18 10:14:34,709 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 10:14:34,710 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:34,710 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 10:14:34,712 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:14:34,712 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:14:34,712 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 10:14:34,713 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:34,713 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 10:14:34,714 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table 2023-07-18 10:14:34,714 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table 2023-07-18 10:14:34,715 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, 
single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 10:14:34,716 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:34,717 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740 2023-07-18 10:14:34,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740 2023-07-18 10:14:34,722 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 10:14:34,725 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 10:14:34,726 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10880372480, jitterRate=0.013313651084899902}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 10:14:34,727 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 10:14:34,740 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689675274588 2023-07-18 10:14:34,757 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 10:14:34,758 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 10:14:34,759 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40931,1689675272348, state=OPEN 2023-07-18 10:14:34,761 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 10:14:34,761 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 10:14:34,765 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 10:14:34,765 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40931,1689675272348 in 368 msec 2023-07-18 10:14:34,775 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 10:14:34,775 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 588 msec 2023-07-18 10:14:34,780 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 973 msec 2023-07-18 10:14:34,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689675274781, completionTime=-1 2023-07-18 10:14:34,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 10:14:34,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-18 10:14:34,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 10:14:34,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689675334833 2023-07-18 10:14:34,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689675394833 2023-07-18 10:14:34,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 51 msec 2023-07-18 10:14:34,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42907,1689675269765-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42907,1689675269765-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42907,1689675269765-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42907, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:34,859 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 10:14:34,870 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 10:14:34,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:34,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 10:14:34,885 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:34,888 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:34,904 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:34,906 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf empty. 2023-07-18 10:14:34,907 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:34,907 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 10:14:34,946 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:34,948 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6fb842bd011abbe63e3755e261be5bdf, NAME => 'hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:34,965 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:34,965 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6fb842bd011abbe63e3755e261be5bdf, disabling compactions & flushes 2023-07-18 10:14:34,965 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 
2023-07-18 10:14:34,965 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:34,965 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. after waiting 0 ms 2023-07-18 10:14:34,965 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:34,965 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:34,965 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6fb842bd011abbe63e3755e261be5bdf: 2023-07-18 10:14:34,969 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:34,994 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675274973"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675274973"}]},"ts":"1689675274973"} 2023-07-18 10:14:35,029 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 10:14:35,032 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:35,037 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675275032"}]},"ts":"1689675275032"} 2023-07-18 10:14:35,043 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 10:14:35,054 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:35,054 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:35,054 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:35,054 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:35,054 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:35,057 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6fb842bd011abbe63e3755e261be5bdf, ASSIGN}] 2023-07-18 10:14:35,060 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6fb842bd011abbe63e3755e261be5bdf, ASSIGN 2023-07-18 10:14:35,062 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6fb842bd011abbe63e3755e261be5bdf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:35,170 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42907,1689675269765] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:35,173 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42907,1689675269765] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 10:14:35,176 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:35,179 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:35,184 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,186 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e empty. 2023-07-18 10:14:35,187 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,187 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 10:14:35,213 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 10:14:35,215 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6fb842bd011abbe63e3755e261be5bdf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:35,215 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675275214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675275214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675275214"}]},"ts":"1689675275214"} 2023-07-18 10:14:35,218 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:35,220 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => c279e5fb45e4dd6ee6ca1bf14c1ea18e, NAME => 'hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:35,223 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 6fb842bd011abbe63e3755e261be5bdf, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:35,241 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:35,242 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing c279e5fb45e4dd6ee6ca1bf14c1ea18e, disabling compactions & flushes 2023-07-18 10:14:35,242 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:35,242 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:35,242 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. after waiting 0 ms 2023-07-18 10:14:35,242 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:35,242 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 
2023-07-18 10:14:35,242 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for c279e5fb45e4dd6ee6ca1bf14c1ea18e: 2023-07-18 10:14:35,245 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:35,247 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675275247"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675275247"}]},"ts":"1689675275247"} 2023-07-18 10:14:35,253 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 10:14:35,255 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:35,255 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675275255"}]},"ts":"1689675275255"} 2023-07-18 10:14:35,257 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 10:14:35,261 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:35,262 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:35,262 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:35,262 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:35,262 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:35,262 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c279e5fb45e4dd6ee6ca1bf14c1ea18e, ASSIGN}] 2023-07-18 10:14:35,265 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c279e5fb45e4dd6ee6ca1bf14c1ea18e, ASSIGN 2023-07-18 10:14:35,266 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c279e5fb45e4dd6ee6ca1bf14c1ea18e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:35,383 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 
2023-07-18 10:14:35,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6fb842bd011abbe63e3755e261be5bdf, NAME => 'hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:35,385 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:35,385 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:35,385 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:35,385 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:35,390 INFO [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:35,395 DEBUG [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/info 2023-07-18 10:14:35,395 DEBUG [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/info 2023-07-18 10:14:35,396 INFO [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6fb842bd011abbe63e3755e261be5bdf columnFamilyName info 2023-07-18 10:14:35,397 INFO [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] regionserver.HStore(310): Store=6fb842bd011abbe63e3755e261be5bdf/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:35,398 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:35,400 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:35,405 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:35,410 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:35,415 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6fb842bd011abbe63e3755e261be5bdf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10720804480, jitterRate=-0.0015472769737243652}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:35,415 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6fb842bd011abbe63e3755e261be5bdf: 2023-07-18 10:14:35,417 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 10:14:35,418 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=c279e5fb45e4dd6ee6ca1bf14c1ea18e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:35,418 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf., pid=7, masterSystemTime=1689675275377 2023-07-18 10:14:35,418 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675275418"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675275418"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675275418"}]},"ts":"1689675275418"} 2023-07-18 10:14:35,424 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:35,424 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 
2023-07-18 10:14:35,424 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure c279e5fb45e4dd6ee6ca1bf14c1ea18e, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:35,425 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6fb842bd011abbe63e3755e261be5bdf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:35,426 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675275425"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675275425"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675275425"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675275425"}]},"ts":"1689675275425"} 2023-07-18 10:14:35,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 10:14:35,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 6fb842bd011abbe63e3755e261be5bdf, server=jenkins-hbase4.apache.org,40931,1689675272348 in 207 msec 2023-07-18 10:14:35,438 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 10:14:35,438 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6fb842bd011abbe63e3755e261be5bdf, ASSIGN in 377 msec 2023-07-18 10:14:35,440 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:35,441 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675275441"}]},"ts":"1689675275441"} 2023-07-18 10:14:35,443 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 10:14:35,447 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:35,451 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 575 msec 2023-07-18 10:14:35,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 10:14:35,489 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:14:35,489 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:35,522 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 10:14:35,536 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:14:35,542 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 26 msec 2023-07-18 10:14:35,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 10:14:35,548 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-18 10:14:35,548 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 10:14:35,583 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:35,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c279e5fb45e4dd6ee6ca1bf14c1ea18e, NAME => 'hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:35,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 10:14:35,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. service=MultiRowMutationService 2023-07-18 10:14:35,585 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 10:14:35,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:35,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,588 INFO [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,590 DEBUG [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m 2023-07-18 10:14:35,591 DEBUG [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m 2023-07-18 10:14:35,591 INFO [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c279e5fb45e4dd6ee6ca1bf14c1ea18e columnFamilyName m 2023-07-18 10:14:35,592 INFO [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] regionserver.HStore(310): Store=c279e5fb45e4dd6ee6ca1bf14c1ea18e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:35,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,598 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:35,601 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:35,602 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c279e5fb45e4dd6ee6ca1bf14c1ea18e; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@770f0130, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:35,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c279e5fb45e4dd6ee6ca1bf14c1ea18e: 2023-07-18 10:14:35,604 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e., pid=9, masterSystemTime=1689675275578 2023-07-18 10:14:35,608 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:35,608 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:35,609 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=c279e5fb45e4dd6ee6ca1bf14c1ea18e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:35,610 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675275609"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675275609"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675275609"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675275609"}]},"ts":"1689675275609"} 2023-07-18 10:14:35,620 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-18 10:14:35,620 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure c279e5fb45e4dd6ee6ca1bf14c1ea18e, server=jenkins-hbase4.apache.org,40931,1689675272348 in 192 msec 2023-07-18 10:14:35,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 10:14:35,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c279e5fb45e4dd6ee6ca1bf14c1ea18e, ASSIGN in 358 msec 2023-07-18 10:14:35,640 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:14:35,650 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 103 msec 2023-07-18 10:14:35,652 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:35,652 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675275652"}]},"ts":"1689675275652"} 2023-07-18 10:14:35,655 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 10:14:35,658 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:35,661 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 488 msec 2023-07-18 10:14:35,666 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 10:14:35,670 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 10:14:35,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.085sec 2023-07-18 10:14:35,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 10:14:35,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 10:14:35,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 10:14:35,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42907,1689675269765-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 10:14:35,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42907,1689675269765-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-18 10:14:35,683 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 10:14:35,683 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-18 10:14:35,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 10:14:35,748 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:35,748 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:35,751 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 10:14:35,759 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 10:14:35,767 DEBUG [Listener at localhost/45689] zookeeper.ReadOnlyZKClient(139): Connect 0x0064b392 to 127.0.0.1:53154 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:14:35,775 DEBUG [Listener at localhost/45689] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5648af83, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:14:35,795 DEBUG [hconnection-0x5f7045aa-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:14:35,808 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50348, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:14:35,822 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:35,824 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:35,836 DEBUG [Listener at localhost/45689] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 10:14:35,840 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40186, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 10:14:35,853 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 10:14:35,854 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:35,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 10:14:35,861 DEBUG [Listener at localhost/45689] zookeeper.ReadOnlyZKClient(139): Connect 0x6ec31a2b to 127.0.0.1:53154 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-18 10:14:35,866 DEBUG [Listener at localhost/45689] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@466435b1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:14:35,867 INFO [Listener at localhost/45689] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:53154 2023-07-18 10:14:35,872 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:14:35,873 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10177ed05f8000a connected 2023-07-18 10:14:35,901 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=421, OpenFileDescriptor=673, MaxFileDescriptor=60000, SystemLoadAverage=504, ProcessCount=173, AvailableMemoryMB=3150 2023-07-18 10:14:35,904 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-18 10:14:35,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:35,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:35,978 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 10:14:35,991 INFO [Listener at localhost/45689] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:14:35,992 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:35,992 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:35,992 INFO [Listener at localhost/45689] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:14:35,992 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:14:35,992 INFO [Listener at localhost/45689] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:14:35,992 INFO [Listener at localhost/45689] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:14:35,997 INFO [Listener at localhost/45689] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35633 2023-07-18 10:14:35,997 INFO [Listener at localhost/45689] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-18 10:14:35,998 DEBUG [Listener at localhost/45689] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:14:36,000 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:36,001 INFO [Listener at localhost/45689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:36,002 INFO [Listener at localhost/45689] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35633 connecting to ZooKeeper ensemble=127.0.0.1:53154 2023-07-18 10:14:36,005 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:356330x0, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:14:36,006 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(162): regionserver:356330x0, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 10:14:36,008 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35633-0x10177ed05f8000b connected 2023-07-18 10:14:36,008 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(162): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 10:14:36,009 DEBUG [Listener at localhost/45689] zookeeper.ZKUtil(164): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:14:36,010 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35633 2023-07-18 10:14:36,010 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35633 2023-07-18 10:14:36,014 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35633 2023-07-18 10:14:36,015 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35633 2023-07-18 10:14:36,015 DEBUG [Listener at localhost/45689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35633 2023-07-18 10:14:36,017 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:14:36,017 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:14:36,018 INFO [Listener at localhost/45689] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:14:36,018 INFO [Listener at localhost/45689] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:14:36,018 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter 
static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:14:36,018 INFO [Listener at localhost/45689] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:14:36,019 INFO [Listener at localhost/45689] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 10:14:36,019 INFO [Listener at localhost/45689] http.HttpServer(1146): Jetty bound to port 44927 2023-07-18 10:14:36,019 INFO [Listener at localhost/45689] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:14:36,020 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:36,021 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1621780c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:14:36,021 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:36,021 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b8a7a95{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:14:36,171 INFO [Listener at localhost/45689] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:14:36,172 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:14:36,172 INFO [Listener at localhost/45689] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:14:36,172 INFO [Listener at localhost/45689] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 10:14:36,174 INFO [Listener at localhost/45689] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:14:36,175 INFO [Listener at localhost/45689] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3576228c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/java.io.tmpdir/jetty-0_0_0_0-44927-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4135955071739304791/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:14:36,178 INFO [Listener at localhost/45689] server.AbstractConnector(333): Started ServerConnector@191c4c74{HTTP/1.1, (http/1.1)}{0.0.0.0:44927} 2023-07-18 10:14:36,178 INFO [Listener at localhost/45689] server.Server(415): Started @11889ms 2023-07-18 10:14:36,181 INFO [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(951): ClusterId : b6e04fb6-9321-429c-8da0-022bc4479b58 2023-07-18 10:14:36,185 DEBUG [RS:3;jenkins-hbase4:35633] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 
10:14:36,187 DEBUG [RS:3;jenkins-hbase4:35633] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:14:36,187 DEBUG [RS:3;jenkins-hbase4:35633] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:14:36,189 DEBUG [RS:3;jenkins-hbase4:35633] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:14:36,191 DEBUG [RS:3;jenkins-hbase4:35633] zookeeper.ReadOnlyZKClient(139): Connect 0x39f9e47b to 127.0.0.1:53154 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:14:36,199 DEBUG [RS:3;jenkins-hbase4:35633] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33018f0b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:14:36,200 DEBUG [RS:3;jenkins-hbase4:35633] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7603070f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:14:36,210 DEBUG [RS:3;jenkins-hbase4:35633] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:35633 2023-07-18 10:14:36,210 INFO [RS:3;jenkins-hbase4:35633] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:14:36,210 INFO [RS:3;jenkins-hbase4:35633] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:14:36,210 DEBUG [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:14:36,211 INFO [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42907,1689675269765 with isa=jenkins-hbase4.apache.org/172.31.14.131:35633, startcode=1689675275991 2023-07-18 10:14:36,211 DEBUG [RS:3;jenkins-hbase4:35633] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:14:36,215 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53763, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:14:36,216 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42907] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,216 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 10:14:36,217 DEBUG [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796 2023-07-18 10:14:36,217 DEBUG [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38869 2023-07-18 10:14:36,217 DEBUG [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39059 2023-07-18 10:14:36,221 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:36,221 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:36,221 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:36,221 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:36,221 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:36,222 DEBUG [RS:3;jenkins-hbase4:35633] zookeeper.ZKUtil(162): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,222 WARN [RS:3;jenkins-hbase4:35633] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 10:14:36,222 INFO [RS:3;jenkins-hbase4:35633] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:14:36,222 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35633,1689675275991] 2023-07-18 10:14:36,222 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 10:14:36,223 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,223 DEBUG [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,229 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:36,229 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42907,1689675269765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 10:14:36,229 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:36,229 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:36,230 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:36,230 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:36,231 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:36,231 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:36,231 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:36,231 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:36,236 DEBUG [RS:3;jenkins-hbase4:35633] zookeeper.ZKUtil(162): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,237 DEBUG [RS:3;jenkins-hbase4:35633] zookeeper.ZKUtil(162): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:36,238 DEBUG [RS:3;jenkins-hbase4:35633] zookeeper.ZKUtil(162): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:36,238 DEBUG [RS:3;jenkins-hbase4:35633] zookeeper.ZKUtil(162): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:36,240 DEBUG [RS:3;jenkins-hbase4:35633] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:14:36,240 INFO [RS:3;jenkins-hbase4:35633] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:14:36,243 INFO [RS:3;jenkins-hbase4:35633] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:14:36,243 INFO [RS:3;jenkins-hbase4:35633] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:14:36,243 INFO [RS:3;jenkins-hbase4:35633] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:36,246 INFO [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:14:36,249 INFO [RS:3;jenkins-hbase4:35633] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 10:14:36,249 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,249 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,249 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,249 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,249 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,249 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:14:36,249 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,249 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,250 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,250 DEBUG [RS:3;jenkins-hbase4:35633] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:14:36,255 INFO [RS:3;jenkins-hbase4:35633] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:36,255 INFO [RS:3;jenkins-hbase4:35633] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:36,255 INFO [RS:3;jenkins-hbase4:35633] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:14:36,273 INFO [RS:3;jenkins-hbase4:35633] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:14:36,273 INFO [RS:3;jenkins-hbase4:35633] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35633,1689675275991-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 10:14:36,289 INFO [RS:3;jenkins-hbase4:35633] regionserver.Replication(203): jenkins-hbase4.apache.org,35633,1689675275991 started 2023-07-18 10:14:36,289 INFO [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35633,1689675275991, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35633, sessionid=0x10177ed05f8000b 2023-07-18 10:14:36,289 DEBUG [RS:3;jenkins-hbase4:35633] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:14:36,289 DEBUG [RS:3;jenkins-hbase4:35633] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,289 DEBUG [RS:3;jenkins-hbase4:35633] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35633,1689675275991' 2023-07-18 10:14:36,289 DEBUG [RS:3;jenkins-hbase4:35633] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:14:36,290 DEBUG [RS:3;jenkins-hbase4:35633] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:14:36,290 DEBUG [RS:3;jenkins-hbase4:35633] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:14:36,291 DEBUG [RS:3;jenkins-hbase4:35633] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:14:36,291 DEBUG [RS:3;jenkins-hbase4:35633] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:36,291 DEBUG [RS:3;jenkins-hbase4:35633] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35633,1689675275991' 2023-07-18 10:14:36,291 DEBUG [RS:3;jenkins-hbase4:35633] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:14:36,292 DEBUG [RS:3;jenkins-hbase4:35633] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:14:36,293 DEBUG [RS:3;jenkins-hbase4:35633] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:14:36,293 INFO [RS:3;jenkins-hbase4:35633] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 10:14:36,293 INFO [RS:3;jenkins-hbase4:35633] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
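The block above traces a fourth region server (RS:3 on port 35633) joining the already-running mini-cluster: it reports for duty to the master at 42907, registers an ephemeral znode under /hbase/rs, starts its executor pools and chores, and brings up the flush-table-proc and online-snapshot procedure members. For orientation, a minimal sketch of how a test can start such an extra region server, assuming an HBaseTestingUtility whose mini-cluster is already up (the wrapper class and method names here are illustrative, not taken from this test):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public final class ExtraRegionServerSketch {
      // Brings one more region server up on an already-running mini-cluster.
      public static void startExtraRegionServer(HBaseTestingUtility util) throws Exception {
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        // The new server reports for duty to the master and registers an ephemeral
        // node under /hbase/rs, which is what the ZooKeeper watcher events above react to.
        JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
        rst.waitForServerOnline();
      }
    }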
2023-07-18 10:14:36,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:36,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:36,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:36,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:36,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:36,313 DEBUG [hconnection-0x297c531f-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:14:36,319 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50352, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:14:36,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:36,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:36,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:36,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:36,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:40186 deadline: 1689676476339, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
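The AddRSGroup, ListRSGroupInfos and MoveServers requests above, and the ConstraintException rejecting the attempt to move jenkins-hbase4.apache.org:42907 (the master address, not a live region server), correspond to calls on the rsgroup admin client. A minimal sketch of that call pattern, assuming a Connection to this cluster; the group name and server address come from the log, while the sketch's class and method names are illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class RSGroupMoveSketch {
      public static void moveMasterIntoOwnGroup(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master");  // "add rsgroup master" in the entries above
        try {
          // The master address is not a live region server, so the move is rejected.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42907)),
              "master");
        } catch (ConstraintException expected) {
          // Matches "Server ... is either offline or it does not exist." logged above.
        }
      }
    }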
2023-07-18 10:14:36,342 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:14:36,345 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:36,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:36,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:36,347 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:36,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:36,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:36,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:36,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:36,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:36,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:36,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:36,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:36,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:36,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:36,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:36,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:36,380 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:36,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:36,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:36,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:36,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:36,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 10:14:36,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048] are moved back to default 2023-07-18 10:14:36,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:36,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:36,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:36,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:36,396 INFO [RS:3;jenkins-hbase4:35633] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35633%2C1689675275991, suffix=, logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,35633,1689675275991, archiveDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs, maxLogs=32 2023-07-18 10:14:36,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:36,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:36,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:36,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:36,426 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:36,427 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK] 2023-07-18 10:14:36,428 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK] 2023-07-18 10:14:36,428 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK] 2023-07-18 10:14:36,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-18 10:14:36,441 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:36,442 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:36,442 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:36,443 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:36,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:14:36,447 INFO [RS:3;jenkins-hbase4:35633] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,35633,1689675275991/jenkins-hbase4.apache.org%2C35633%2C1689675275991.1689675276397 2023-07-18 10:14:36,449 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:36,458 DEBUG [RS:3;jenkins-hbase4:35633] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK], DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK], DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK]] 2023-07-18 10:14:36,458 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:36,459 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:36,462 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db empty. 2023-07-18 10:14:36,463 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:36,463 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:36,463 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:36,463 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 empty. 2023-07-18 10:14:36,463 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:36,463 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a empty. 2023-07-18 10:14:36,464 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 empty. 2023-07-18 10:14:36,464 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:36,464 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 empty. 
2023-07-18 10:14:36,464 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:36,465 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:36,465 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:36,465 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 10:14:36,502 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:36,504 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => f6b1daae9da3cb2f310946b5123a72db, NAME => 'Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:36,507 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 974f3092d70118b627077e1fc3fa861a, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:36,507 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1505fbb029f19f5e2eaf1bcb2ea37bc2, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:36,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:14:36,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:36,607 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 974f3092d70118b627077e1fc3fa861a, disabling compactions & flushes 2023-07-18 10:14:36,607 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:36,607 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:36,607 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. after waiting 0 ms 2023-07-18 10:14:36,607 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:36,607 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 
2023-07-18 10:14:36,607 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 974f3092d70118b627077e1fc3fa861a: 2023-07-18 10:14:36,608 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 19d27a294b025821541b9f52606e06d4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:36,608 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:36,620 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing f6b1daae9da3cb2f310946b5123a72db, disabling compactions & flushes 2023-07-18 10:14:36,620 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:36,620 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:36,620 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. after waiting 0 ms 2023-07-18 10:14:36,620 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:36,620 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 
2023-07-18 10:14:36,620 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for f6b1daae9da3cb2f310946b5123a72db: 2023-07-18 10:14:36,624 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:36,627 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:36,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 1505fbb029f19f5e2eaf1bcb2ea37bc2, disabling compactions & flushes 2023-07-18 10:14:36,629 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:36,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:36,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. after waiting 0 ms 2023-07-18 10:14:36,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:36,630 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 
2023-07-18 10:14:36,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 1505fbb029f19f5e2eaf1bcb2ea37bc2: 2023-07-18 10:14:36,651 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:36,652 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, disabling compactions & flushes 2023-07-18 10:14:36,652 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:36,652 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:36,652 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. after waiting 0 ms 2023-07-18 10:14:36,652 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:36,652 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:36,652 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6: 2023-07-18 10:14:36,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:36,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 19d27a294b025821541b9f52606e06d4, disabling compactions & flushes 2023-07-18 10:14:36,656 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:36,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:36,657 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 
after waiting 0 ms 2023-07-18 10:14:36,657 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:36,657 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:36,657 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 19d27a294b025821541b9f52606e06d4: 2023-07-18 10:14:36,667 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:36,669 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675276669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675276669"}]},"ts":"1689675276669"} 2023-07-18 10:14:36,669 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675276669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675276669"}]},"ts":"1689675276669"} 2023-07-18 10:14:36,669 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675276669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675276669"}]},"ts":"1689675276669"} 2023-07-18 10:14:36,670 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675276669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675276669"}]},"ts":"1689675276669"} 2023-07-18 10:14:36,670 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675276669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675276669"}]},"ts":"1689675276669"} 2023-07-18 10:14:36,727 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
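The CreateTableProcedure entries above show Group_testTableMoveTruncateAndDrop being laid out with a single column family f and five regions bounded by aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz before the five rows are added to hbase:meta. One way to reproduce that layout from a client, sketched under the assumption of an Admin handle to this cluster (the wrapper class and method names are illustrative):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class CreateTableSketch {
      public static void createFiveRegionTable(Admin admin) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))  // single family 'f', as logged
            .build();
        // Five regions between "aaaaa" and "zzzzz"; the interior split points HBase computes
        // (i\xBF\x14i\xBE and r\x1C\xC7r\x1B) are consistent with the region boundaries above.
        admin.createTable(desc, Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
      }
    }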
2023-07-18 10:14:36,728 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:36,729 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675276729"}]},"ts":"1689675276729"} 2023-07-18 10:14:36,731 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 10:14:36,740 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:36,740 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:36,740 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:36,740 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:36,741 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, ASSIGN}] 2023-07-18 10:14:36,744 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, ASSIGN 2023-07-18 10:14:36,744 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, ASSIGN 2023-07-18 10:14:36,745 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, ASSIGN 2023-07-18 10:14:36,745 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, ASSIGN 2023-07-18 10:14:36,748 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, ASSIGN 2023-07-18 10:14:36,748 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:36,748 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:36,748 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:36,748 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:36,751 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:36,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:14:36,899 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 10:14:36,903 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=974f3092d70118b627077e1fc3fa861a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:36,903 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f6b1daae9da3cb2f310946b5123a72db, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:36,903 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=1505fbb029f19f5e2eaf1bcb2ea37bc2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:36,903 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675276903"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675276903"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675276903"}]},"ts":"1689675276903"} 2023-07-18 10:14:36,903 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675276903"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675276903"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675276903"}]},"ts":"1689675276903"} 2023-07-18 10:14:36,904 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675276903"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675276903"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675276903"}]},"ts":"1689675276903"} 2023-07-18 10:14:36,903 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=19d27a294b025821541b9f52606e06d4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:36,903 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:36,904 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675276903"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675276903"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675276903"}]},"ts":"1689675276903"} 2023-07-18 10:14:36,904 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675276903"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675276903"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675276903"}]},"ts":"1689675276903"} 2023-07-18 10:14:36,906 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 
974f3092d70118b627077e1fc3fa861a, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:36,908 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=13, state=RUNNABLE; OpenRegionProcedure f6b1daae9da3cb2f310946b5123a72db, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:36,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=14, state=RUNNABLE; OpenRegionProcedure 1505fbb029f19f5e2eaf1bcb2ea37bc2, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:36,912 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=16, state=RUNNABLE; OpenRegionProcedure 19d27a294b025821541b9f52606e06d4, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:36,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=17, state=RUNNABLE; OpenRegionProcedure 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:37,063 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:37,063 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:14:37,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:14:37,067 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39486, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:14:37,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 
2023-07-18 10:14:37,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 974f3092d70118b627077e1fc3fa861a, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 10:14:37,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:37,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:37,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:37,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:37,071 INFO [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:37,073 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 
2023-07-18 10:14:37,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 10:14:37,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:37,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,075 DEBUG [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/f 2023-07-18 10:14:37,075 DEBUG [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/f 2023-07-18 10:14:37,075 INFO [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 974f3092d70118b627077e1fc3fa861a columnFamilyName f 2023-07-18 10:14:37,076 INFO [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,076 INFO [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] regionserver.HStore(310): Store=974f3092d70118b627077e1fc3fa861a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:37,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a 
2023-07-18 10:14:37,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:37,079 DEBUG [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/f 2023-07-18 10:14:37,079 DEBUG [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/f 2023-07-18 10:14:37,080 INFO [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 columnFamilyName f 2023-07-18 10:14:37,081 INFO [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] regionserver.HStore(310): Store=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:37,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:37,088 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:37,088 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 974f3092d70118b627077e1fc3fa861a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10842942400, jitterRate=0.00982770323753357}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:37,088 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 974f3092d70118b627077e1fc3fa861a: 2023-07-18 10:14:37,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,090 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a., pid=18, masterSystemTime=1689675277061 2023-07-18 10:14:37,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:37,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:37,093 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:37,093 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:37,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1505fbb029f19f5e2eaf1bcb2ea37bc2, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 10:14:37,094 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11921075680, jitterRate=0.11023668944835663}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:37,094 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=974f3092d70118b627077e1fc3fa861a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:37,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:37,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,094 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277094"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675277094"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675277094"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675277094"}]},"ts":"1689675277094"} 2023-07-18 10:14:37,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6: 2023-07-18 10:14:37,097 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6., pid=22, masterSystemTime=1689675277063 2023-07-18 10:14:37,099 INFO [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,101 DEBUG [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/f 2023-07-18 10:14:37,101 DEBUG [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/f 2023-07-18 10:14:37,102 INFO [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1505fbb029f19f5e2eaf1bcb2ea37bc2 columnFamilyName f 2023-07-18 10:14:37,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:37,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 
2023-07-18 10:14:37,104 INFO [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] regionserver.HStore(310): Store=1505fbb029f19f5e2eaf1bcb2ea37bc2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:37,104 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:37,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6b1daae9da3cb2f310946b5123a72db, NAME => 'Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 10:14:37,105 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:37,105 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675277105"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675277105"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675277105"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675277105"}]},"ts":"1689675277105"} 2023-07-18 10:14:37,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:37,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,107 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-07-18 10:14:37,107 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure 974f3092d70118b627077e1fc3fa861a, server=jenkins-hbase4.apache.org,40931,1689675272348 in 193 msec 2023-07-18 10:14:37,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,110 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, 
state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, ASSIGN in 366 msec 2023-07-18 10:14:37,112 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=17 2023-07-18 10:14:37,112 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=17, state=SUCCESS; OpenRegionProcedure 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, server=jenkins-hbase4.apache.org,42163,1689675271845 in 194 msec 2023-07-18 10:14:37,114 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, ASSIGN in 371 msec 2023-07-18 10:14:37,117 INFO [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,119 DEBUG [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/f 2023-07-18 10:14:37,119 DEBUG [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/f 2023-07-18 10:14:37,120 INFO [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6b1daae9da3cb2f310946b5123a72db columnFamilyName f 2023-07-18 10:14:37,121 INFO [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] regionserver.HStore(310): Store=f6b1daae9da3cb2f310946b5123a72db/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:37,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:37,126 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1505fbb029f19f5e2eaf1bcb2ea37bc2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10511177280, jitterRate=-0.02107033133506775}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:37,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1505fbb029f19f5e2eaf1bcb2ea37bc2: 2023-07-18 10:14:37,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,128 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2., pid=20, masterSystemTime=1689675277061 2023-07-18 10:14:37,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:37,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:37,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:37,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 
2023-07-18 10:14:37,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6b1daae9da3cb2f310946b5123a72db; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11735455680, jitterRate=0.09294947981834412}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:37,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6b1daae9da3cb2f310946b5123a72db: 2023-07-18 10:14:37,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 19d27a294b025821541b9f52606e06d4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 10:14:37,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:37,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,133 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db., pid=19, masterSystemTime=1689675277063 2023-07-18 10:14:37,133 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=1505fbb029f19f5e2eaf1bcb2ea37bc2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:37,133 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277133"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675277133"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675277133"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675277133"}]},"ts":"1689675277133"} 2023-07-18 10:14:37,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:37,136 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 
2023-07-18 10:14:37,136 INFO [StoreOpener-19d27a294b025821541b9f52606e06d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,137 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f6b1daae9da3cb2f310946b5123a72db, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:37,138 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675277137"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675277137"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675277137"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675277137"}]},"ts":"1689675277137"} 2023-07-18 10:14:37,139 DEBUG [StoreOpener-19d27a294b025821541b9f52606e06d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/f 2023-07-18 10:14:37,139 DEBUG [StoreOpener-19d27a294b025821541b9f52606e06d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/f 2023-07-18 10:14:37,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=14 2023-07-18 10:14:37,141 INFO [StoreOpener-19d27a294b025821541b9f52606e06d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 19d27a294b025821541b9f52606e06d4 columnFamilyName f 2023-07-18 10:14:37,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=14, state=SUCCESS; OpenRegionProcedure 1505fbb029f19f5e2eaf1bcb2ea37bc2, server=jenkins-hbase4.apache.org,40931,1689675272348 in 227 msec 2023-07-18 10:14:37,142 INFO [StoreOpener-19d27a294b025821541b9f52606e06d4-1] regionserver.HStore(310): Store=19d27a294b025821541b9f52606e06d4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:37,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,143 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, ASSIGN in 400 msec 2023-07-18 10:14:37,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,144 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=13 2023-07-18 10:14:37,145 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=13, state=SUCCESS; OpenRegionProcedure f6b1daae9da3cb2f310946b5123a72db, server=jenkins-hbase4.apache.org,42163,1689675271845 in 233 msec 2023-07-18 10:14:37,147 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, ASSIGN in 404 msec 2023-07-18 10:14:37,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:37,152 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 19d27a294b025821541b9f52606e06d4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11543952800, jitterRate=0.07511438429355621}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:37,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 19d27a294b025821541b9f52606e06d4: 2023-07-18 10:14:37,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4., pid=21, masterSystemTime=1689675277061 2023-07-18 10:14:37,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:37,155 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 
2023-07-18 10:14:37,156 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=19d27a294b025821541b9f52606e06d4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:37,156 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277156"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675277156"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675277156"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675277156"}]},"ts":"1689675277156"} 2023-07-18 10:14:37,161 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=16 2023-07-18 10:14:37,161 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=16, state=SUCCESS; OpenRegionProcedure 19d27a294b025821541b9f52606e06d4, server=jenkins-hbase4.apache.org,40931,1689675272348 in 246 msec 2023-07-18 10:14:37,163 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-07-18 10:14:37,165 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, ASSIGN in 420 msec 2023-07-18 10:14:37,165 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:37,166 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675277166"}]},"ts":"1689675277166"} 2023-07-18 10:14:37,174 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 10:14:37,177 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:37,185 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 756 msec 2023-07-18 10:14:37,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:14:37,569 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-18 10:14:37,569 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-18 10:14:37,570 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:37,577 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 
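[Editor's note, not part of the log] The CreateTableProcedure (pid=12) has now finished, and in the entries that follow the test moves the new table into the RegionServer group Group_testTableMoveTruncateAndDrop_125047047, which is what triggers the REOPEN/MOVE TransitRegionStateProcedures seen below. A rough client-side sketch of that call is given here, assuming the RSGroupAdminClient helper from the hbase-rsgroup module and a target group that already exists with servers assigned (the test sets that up earlier, outside this excerpt); the class name and connection handling are assumptions for the example.

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Illustrative only: moves an existing table into a RegionServer group,
    // which causes the master to close and reopen each region on a server
    // belonging to that group (the REOPEN/MOVE procedures in the log).
    public final class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        String group = "Group_testTableMoveTruncateAndDrop_125047047";
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.moveTables(Collections.singleton(table), group);
          // Confirm the table's group membership after the move.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println("Table now in group: " + info.getName());
        }
      }
    }
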
2023-07-18 10:14:37,578 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:37,579 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-18 10:14:37,579 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:37,585 DEBUG [Listener at localhost/45689] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:14:37,596 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36656, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:14:37,600 DEBUG [Listener at localhost/45689] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:14:37,606 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39296, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:14:37,607 DEBUG [Listener at localhost/45689] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:14:37,611 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50362, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:14:37,613 DEBUG [Listener at localhost/45689] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:14:37,616 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39496, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:14:37,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:37,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:37,630 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:37,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:37,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:37,647 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region f6b1daae9da3cb2f310946b5123a72db to RSGroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:37,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:37,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:37,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:37,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:37,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, REOPEN/MOVE 2023-07-18 10:14:37,651 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, REOPEN/MOVE 2023-07-18 10:14:37,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 1505fbb029f19f5e2eaf1bcb2ea37bc2 to RSGroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:37,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:37,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:37,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:37,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:37,652 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=f6b1daae9da3cb2f310946b5123a72db, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:37,653 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675277652"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675277652"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675277652"}]},"ts":"1689675277652"} 2023-07-18 10:14:37,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, REOPEN/MOVE 2023-07-18 10:14:37,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 974f3092d70118b627077e1fc3fa861a to RSGroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,654 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, REOPEN/MOVE 2023-07-18 10:14:37,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:37,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:37,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:37,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:37,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:37,656 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=1505fbb029f19f5e2eaf1bcb2ea37bc2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:37,656 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277656"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675277656"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675277656"}]},"ts":"1689675277656"} 2023-07-18 10:14:37,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, REOPEN/MOVE 2023-07-18 10:14:37,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 19d27a294b025821541b9f52606e06d4 to RSGroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; CloseRegionProcedure f6b1daae9da3cb2f310946b5123a72db, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:37,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] 
balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:37,659 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, REOPEN/MOVE 2023-07-18 10:14:37,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:37,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:37,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:37,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:37,661 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=24, state=RUNNABLE; CloseRegionProcedure 1505fbb029f19f5e2eaf1bcb2ea37bc2, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:37,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, REOPEN/MOVE 2023-07-18 10:14:37,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 to RSGroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:37,664 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=974f3092d70118b627077e1fc3fa861a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:37,664 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, REOPEN/MOVE 2023-07-18 10:14:37,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:37,664 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675277663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675277663"}]},"ts":"1689675277663"} 2023-07-18 10:14:37,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:37,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:37,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:37,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(378): Number of 
tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:37,667 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=19d27a294b025821541b9f52606e06d4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:37,667 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277667"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675277667"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675277667"}]},"ts":"1689675277667"} 2023-07-18 10:14:37,668 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=25, state=RUNNABLE; CloseRegionProcedure 974f3092d70118b627077e1fc3fa861a, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:37,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, REOPEN/MOVE 2023-07-18 10:14:37,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_125047047, current retry=0 2023-07-18 10:14:37,671 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, REOPEN/MOVE 2023-07-18 10:14:37,674 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:37,674 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675277674"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675277674"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675277674"}]},"ts":"1689675277674"} 2023-07-18 10:14:37,675 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 19d27a294b025821541b9f52606e06d4, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:37,679 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:37,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 19d27a294b025821541b9f52606e06d4, disabling compactions & flushes 2023-07-18 10:14:37,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:37,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:37,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. after waiting 0 ms 2023-07-18 10:14:37,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:37,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6b1daae9da3cb2f310946b5123a72db, disabling compactions & flushes 2023-07-18 10:14:37,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:37,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:37,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. after waiting 0 ms 2023-07-18 10:14:37,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:37,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:37,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 
2023-07-18 10:14:37,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6b1daae9da3cb2f310946b5123a72db: 2023-07-18 10:14:37,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f6b1daae9da3cb2f310946b5123a72db move to jenkins-hbase4.apache.org,40033,1689675272048 record at close sequenceid=2 2023-07-18 10:14:37,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:37,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:37,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:37,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 19d27a294b025821541b9f52606e06d4: 2023-07-18 10:14:37,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, disabling compactions & flushes 2023-07-18 10:14:37,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 19d27a294b025821541b9f52606e06d4 move to jenkins-hbase4.apache.org,40033,1689675272048 record at close sequenceid=2 2023-07-18 10:14:37,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:37,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:37,842 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=f6b1daae9da3cb2f310946b5123a72db, regionState=CLOSED 2023-07-18 10:14:37,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. after waiting 0 ms 2023-07-18 10:14:37,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 
2023-07-18 10:14:37,842 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675277842"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675277842"}]},"ts":"1689675277842"} 2023-07-18 10:14:37,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:37,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1505fbb029f19f5e2eaf1bcb2ea37bc2, disabling compactions & flushes 2023-07-18 10:14:37,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:37,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:37,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. after waiting 0 ms 2023-07-18 10:14:37,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:37,847 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=19d27a294b025821541b9f52606e06d4, regionState=CLOSED 2023-07-18 10:14:37,847 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277847"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675277847"}]},"ts":"1689675277847"} 2023-07-18 10:14:37,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-18 10:14:37,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; CloseRegionProcedure f6b1daae9da3cb2f310946b5123a72db, server=jenkins-hbase4.apache.org,42163,1689675271845 in 188 msec 2023-07-18 10:14:37,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:37,853 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 
2023-07-18 10:14:37,853 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6: 2023-07-18 10:14:37,853 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 move to jenkins-hbase4.apache.org,35633,1689675275991 record at close sequenceid=2 2023-07-18 10:14:37,853 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40033,1689675272048; forceNewPlan=false, retain=false 2023-07-18 10:14:37,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-18 10:14:37,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 19d27a294b025821541b9f52606e06d4, server=jenkins-hbase4.apache.org,40931,1689675272348 in 176 msec 2023-07-18 10:14:37,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:37,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:37,858 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, regionState=CLOSED 2023-07-18 10:14:37,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1505fbb029f19f5e2eaf1bcb2ea37bc2: 2023-07-18 10:14:37,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1505fbb029f19f5e2eaf1bcb2ea37bc2 move to jenkins-hbase4.apache.org,35633,1689675275991 record at close sequenceid=2 2023-07-18 10:14:37,858 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675277858"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675277858"}]},"ts":"1689675277858"} 2023-07-18 10:14:37,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:37,859 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40033,1689675272048; forceNewPlan=false, retain=false 2023-07-18 10:14:37,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:37,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:37,862 
DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 974f3092d70118b627077e1fc3fa861a, disabling compactions & flushes 2023-07-18 10:14:37,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:37,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:37,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. after waiting 0 ms 2023-07-18 10:14:37,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:37,863 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=1505fbb029f19f5e2eaf1bcb2ea37bc2, regionState=CLOSED 2023-07-18 10:14:37,863 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277863"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675277863"}]},"ts":"1689675277863"} 2023-07-18 10:14:37,866 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-18 10:14:37,866 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, server=jenkins-hbase4.apache.org,42163,1689675271845 in 182 msec 2023-07-18 10:14:37,868 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:37,870 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=24 2023-07-18 10:14:37,870 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; CloseRegionProcedure 1505fbb029f19f5e2eaf1bcb2ea37bc2, server=jenkins-hbase4.apache.org,40931,1689675272348 in 205 msec 2023-07-18 10:14:37,871 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:37,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 
10:14:37,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:37,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 974f3092d70118b627077e1fc3fa861a: 2023-07-18 10:14:37,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 974f3092d70118b627077e1fc3fa861a move to jenkins-hbase4.apache.org,35633,1689675275991 record at close sequenceid=2 2023-07-18 10:14:37,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:37,876 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=974f3092d70118b627077e1fc3fa861a, regionState=CLOSED 2023-07-18 10:14:37,876 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675277876"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675277876"}]},"ts":"1689675277876"} 2023-07-18 10:14:37,881 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=25 2023-07-18 10:14:37,881 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=25, state=SUCCESS; CloseRegionProcedure 974f3092d70118b627077e1fc3fa861a, server=jenkins-hbase4.apache.org,40931,1689675272348 in 210 msec 2023-07-18 10:14:37,882 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:38,004 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 10:14:38,004 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=f6b1daae9da3cb2f310946b5123a72db, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:38,004 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=1505fbb029f19f5e2eaf1bcb2ea37bc2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,004 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=974f3092d70118b627077e1fc3fa861a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,004 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,005 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278004"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278004"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278004"}]},"ts":"1689675278004"} 2023-07-18 10:14:38,005 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278004"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278004"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278004"}]},"ts":"1689675278004"} 2023-07-18 10:14:38,004 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=19d27a294b025821541b9f52606e06d4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:38,005 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675278004"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278004"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278004"}]},"ts":"1689675278004"} 2023-07-18 10:14:38,005 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675278004"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278004"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278004"}]},"ts":"1689675278004"} 2023-07-18 10:14:38,005 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278004"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278004"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278004"}]},"ts":"1689675278004"} 2023-07-18 10:14:38,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=24, state=RUNNABLE; OpenRegionProcedure 
1505fbb029f19f5e2eaf1bcb2ea37bc2, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:38,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=25, state=RUNNABLE; OpenRegionProcedure 974f3092d70118b627077e1fc3fa861a, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:38,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=29, state=RUNNABLE; OpenRegionProcedure 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:38,013 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=23, state=RUNNABLE; OpenRegionProcedure f6b1daae9da3cb2f310946b5123a72db, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:38,016 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=27, state=RUNNABLE; OpenRegionProcedure 19d27a294b025821541b9f52606e06d4, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:38,162 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,163 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:14:38,164 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36670, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:14:38,167 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:38,167 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:14:38,170 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 
2023-07-18 10:14:38,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1505fbb029f19f5e2eaf1bcb2ea37bc2, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 10:14:38,171 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:14:38,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:38,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,179 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:38,179 INFO [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6b1daae9da3cb2f310946b5123a72db, NAME => 'Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 10:14:38,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:38,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,181 DEBUG [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/f 2023-07-18 10:14:38,181 DEBUG 
[StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/f 2023-07-18 10:14:38,182 INFO [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1505fbb029f19f5e2eaf1bcb2ea37bc2 columnFamilyName f 2023-07-18 10:14:38,183 INFO [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,183 INFO [StoreOpener-1505fbb029f19f5e2eaf1bcb2ea37bc2-1] regionserver.HStore(310): Store=1505fbb029f19f5e2eaf1bcb2ea37bc2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:38,186 DEBUG [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/f 2023-07-18 10:14:38,186 DEBUG [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/f 2023-07-18 10:14:38,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,188 INFO [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6b1daae9da3cb2f310946b5123a72db columnFamilyName f 2023-07-18 10:14:38,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,189 INFO [StoreOpener-f6b1daae9da3cb2f310946b5123a72db-1] regionserver.HStore(310): Store=f6b1daae9da3cb2f310946b5123a72db/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:38,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1505fbb029f19f5e2eaf1bcb2ea37bc2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9911202080, jitterRate=-0.07694737613201141}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:38,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1505fbb029f19f5e2eaf1bcb2ea37bc2: 2023-07-18 10:14:38,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2., pid=33, masterSystemTime=1689675278162 2023-07-18 10:14:38,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6b1daae9da3cb2f310946b5123a72db; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11605231680, jitterRate=0.08082142472267151}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:38,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6b1daae9da3cb2f310946b5123a72db: 2023-07-18 10:14:38,209 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=1505fbb029f19f5e2eaf1bcb2ea37bc2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,210 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278209"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675278209"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675278209"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675278209"}]},"ts":"1689675278209"} 
2023-07-18 10:14:38,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:38,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:38,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db., pid=36, masterSystemTime=1689675278167 2023-07-18 10:14:38,217 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:38,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 10:14:38,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:38,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:38,223 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=f6b1daae9da3cb2f310946b5123a72db, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:38,223 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675278223"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675278223"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675278223"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675278223"}]},"ts":"1689675278223"} 2023-07-18 10:14:38,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:38,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 
2023-07-18 10:14:38,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 19d27a294b025821541b9f52606e06d4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 10:14:38,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:38,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,228 INFO [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,228 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=24 2023-07-18 10:14:38,229 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=24, state=SUCCESS; OpenRegionProcedure 1505fbb029f19f5e2eaf1bcb2ea37bc2, server=jenkins-hbase4.apache.org,35633,1689675275991 in 212 msec 2023-07-18 10:14:38,229 INFO [StoreOpener-19d27a294b025821541b9f52606e06d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,230 DEBUG [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/f 2023-07-18 10:14:38,230 DEBUG [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/f 2023-07-18 10:14:38,231 INFO [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 columnFamilyName f 2023-07-18 10:14:38,231 DEBUG [StoreOpener-19d27a294b025821541b9f52606e06d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/f 2023-07-18 10:14:38,231 DEBUG [StoreOpener-19d27a294b025821541b9f52606e06d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/f 2023-07-18 10:14:38,232 INFO [StoreOpener-19d27a294b025821541b9f52606e06d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 19d27a294b025821541b9f52606e06d4 columnFamilyName f 2023-07-18 10:14:38,232 INFO [StoreOpener-8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6-1] regionserver.HStore(310): Store=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:38,233 INFO [StoreOpener-19d27a294b025821541b9f52606e06d4-1] regionserver.HStore(310): Store=19d27a294b025821541b9f52606e06d4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:38,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,235 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, REOPEN/MOVE in 577 msec 2023-07-18 10:14:38,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,240 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=23 2023-07-18 10:14:38,240 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=23, state=SUCCESS; OpenRegionProcedure f6b1daae9da3cb2f310946b5123a72db, server=jenkins-hbase4.apache.org,40033,1689675272048 in 219 msec 2023-07-18 10:14:38,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 19d27a294b025821541b9f52606e06d4; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11722321600, jitterRate=0.09172627329826355}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:38,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 19d27a294b025821541b9f52606e06d4: 2023-07-18 10:14:38,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11067338720, jitterRate=0.03072623908519745}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:38,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6: 2023-07-18 10:14:38,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6., pid=35, masterSystemTime=1689675278162 2023-07-18 10:14:38,248 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, REOPEN/MOVE in 591 msec 2023-07-18 10:14:38,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4., pid=37, masterSystemTime=1689675278167 2023-07-18 10:14:38,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:38,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:38,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 
2023-07-18 10:14:38,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 974f3092d70118b627077e1fc3fa861a, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 10:14:38,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:38,252 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:38,252 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675278252"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675278252"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675278252"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675278252"}]},"ts":"1689675278252"} 2023-07-18 10:14:38,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 
2023-07-18 10:14:38,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,253 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=19d27a294b025821541b9f52606e06d4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:38,253 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278253"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675278253"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675278253"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675278253"}]},"ts":"1689675278253"} 2023-07-18 10:14:38,261 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=29 2023-07-18 10:14:38,261 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=29, state=SUCCESS; OpenRegionProcedure 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, server=jenkins-hbase4.apache.org,35633,1689675275991 in 245 msec 2023-07-18 10:14:38,262 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=27 2023-07-18 10:14:38,262 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=27, state=SUCCESS; OpenRegionProcedure 19d27a294b025821541b9f52606e06d4, server=jenkins-hbase4.apache.org,40033,1689675272048 in 243 msec 2023-07-18 10:14:38,263 INFO [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,265 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, REOPEN/MOVE in 597 msec 2023-07-18 10:14:38,266 DEBUG [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/f 2023-07-18 10:14:38,266 DEBUG [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/f 2023-07-18 10:14:38,266 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, REOPEN/MOVE in 602 msec 2023-07-18 10:14:38,266 INFO [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; 
tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 974f3092d70118b627077e1fc3fa861a columnFamilyName f 2023-07-18 10:14:38,267 INFO [StoreOpener-974f3092d70118b627077e1fc3fa861a-1] regionserver.HStore(310): Store=974f3092d70118b627077e1fc3fa861a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:38,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 974f3092d70118b627077e1fc3fa861a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10792022560, jitterRate=0.005085423588752747}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:38,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 974f3092d70118b627077e1fc3fa861a: 2023-07-18 10:14:38,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a., pid=34, masterSystemTime=1689675278162 2023-07-18 10:14:38,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:38,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 
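[Editor's sketch] The entries above show region 974f3092d70118b627077e1fc3fa861a reopening on a server in the target group after the move. A quick way to confirm from a client where the moved regions landed is to ask the table's RegionLocator. This is a minimal sketch assuming the standard HBase 2.x client API with an hbase-site.xml for this cluster on the classpath; the class name ShowRegionLocations is illustrative, only the table name comes from the log.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionLocations {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(table)) {
          // Reads current assignments from hbase:meta, so it reflects the
          // locations written by the REOPEN/MOVE procedures in the log above.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
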
2023-07-18 10:14:38,280 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=974f3092d70118b627077e1fc3fa861a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,280 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278280"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675278280"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675278280"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675278280"}]},"ts":"1689675278280"} 2023-07-18 10:14:38,286 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=25 2023-07-18 10:14:38,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=25, state=SUCCESS; OpenRegionProcedure 974f3092d70118b627077e1fc3fa861a, server=jenkins-hbase4.apache.org,35633,1689675275991 in 273 msec 2023-07-18 10:14:38,289 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, REOPEN/MOVE in 632 msec 2023-07-18 10:14:38,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-18 10:14:38,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_125047047. 
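[Editor's sketch] The "All regions from table(s) ... moved to target group" message above is the completion of an RSGroupAdminService.MoveTables request. On the client side this corresponds to the rsgroup admin call sketched below, assuming the hbase-rsgroup client API that ships with branch-2.4 (RSGroupAdminClient); it also assumes the target group already exists and has servers assigned, since moveTables only re-homes the table. The table and group names are taken from the log, the class name is illustrative.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        String group = "Group_testTableMoveTruncateAndDrop_125047047"; // group name from the log
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The master reassigns the table's regions onto servers in the target group,
          // driving the REOPEN/MOVE TransitRegionStateProcedures recorded above.
          rsGroupAdmin.moveTables(Collections.singleton(table), group);
          // Confirm the assignment, mirroring the GetRSGroupInfoOfTable request in the log.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println(table + " is now in rsgroup " + info.getName());
        }
      }
    }
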
2023-07-18 10:14:38,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:38,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:38,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:38,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:38,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:38,683 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:38,690 INFO [Listener at localhost/45689] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:38,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:38,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:38,706 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675278706"}]},"ts":"1689675278706"} 2023-07-18 10:14:38,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-18 10:14:38,708 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 10:14:38,710 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 10:14:38,711 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, UNASSIGN}] 2023-07-18 10:14:38,714 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, UNASSIGN 2023-07-18 10:14:38,714 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, UNASSIGN 2023-07-18 10:14:38,714 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, UNASSIGN 2023-07-18 10:14:38,714 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, UNASSIGN 2023-07-18 10:14:38,714 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, UNASSIGN 2023-07-18 10:14:38,715 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=1505fbb029f19f5e2eaf1bcb2ea37bc2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,715 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=f6b1daae9da3cb2f310946b5123a72db, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:38,715 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278715"}]},"ts":"1689675278715"} 2023-07-18 10:14:38,715 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675278715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278715"}]},"ts":"1689675278715"} 2023-07-18 10:14:38,715 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=974f3092d70118b627077e1fc3fa861a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,715 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=19d27a294b025821541b9f52606e06d4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:38,716 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278715"}]},"ts":"1689675278715"} 2023-07-18 10:14:38,715 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:38,716 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278715"}]},"ts":"1689675278715"} 2023-07-18 10:14:38,716 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675278715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675278715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675278715"}]},"ts":"1689675278715"} 2023-07-18 10:14:38,717 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=40, state=RUNNABLE; CloseRegionProcedure 1505fbb029f19f5e2eaf1bcb2ea37bc2, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:38,718 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=39, state=RUNNABLE; CloseRegionProcedure f6b1daae9da3cb2f310946b5123a72db, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:38,719 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=41, state=RUNNABLE; CloseRegionProcedure 974f3092d70118b627077e1fc3fa861a, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:38,721 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure 19d27a294b025821541b9f52606e06d4, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:38,723 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:38,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-18 10:14:38,870 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 974f3092d70118b627077e1fc3fa861a, disabling compactions & flushes 2023-07-18 10:14:38,872 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 
2023-07-18 10:14:38,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:38,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. after waiting 0 ms 2023-07-18 10:14:38,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 2023-07-18 10:14:38,872 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 19d27a294b025821541b9f52606e06d4, disabling compactions & flushes 2023-07-18 10:14:38,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:38,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:38,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. after waiting 0 ms 2023-07-18 10:14:38,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:38,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:38,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:38,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4. 2023-07-18 10:14:38,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a. 
2023-07-18 10:14:38,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 19d27a294b025821541b9f52606e06d4: 2023-07-18 10:14:38,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 974f3092d70118b627077e1fc3fa861a: 2023-07-18 10:14:38,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:38,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6b1daae9da3cb2f310946b5123a72db, disabling compactions & flushes 2023-07-18 10:14:38,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:38,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:38,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. after waiting 0 ms 2023-07-18 10:14:38,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:38,887 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=19d27a294b025821541b9f52606e06d4, regionState=CLOSED 2023-07-18 10:14:38,888 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278887"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675278887"}]},"ts":"1689675278887"} 2023-07-18 10:14:38,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:38,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,891 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=974f3092d70118b627077e1fc3fa861a, regionState=CLOSED 2023-07-18 10:14:38,891 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278891"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675278891"}]},"ts":"1689675278891"} 2023-07-18 10:14:38,894 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-18 10:14:38,894 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure 19d27a294b025821541b9f52606e06d4, server=jenkins-hbase4.apache.org,40033,1689675272048 in 169 msec 2023-07-18 10:14:38,900 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=41 2023-07-18 10:14:38,900 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; CloseRegionProcedure 974f3092d70118b627077e1fc3fa861a, server=jenkins-hbase4.apache.org,35633,1689675275991 in 174 msec 2023-07-18 10:14:38,900 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=19d27a294b025821541b9f52606e06d4, UNASSIGN in 183 msec 2023-07-18 10:14:38,902 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=974f3092d70118b627077e1fc3fa861a, UNASSIGN in 189 msec 2023-07-18 10:14:38,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, disabling compactions & flushes 2023-07-18 10:14:38,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:38,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:38,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. after waiting 0 ms 2023-07-18 10:14:38,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 2023-07-18 10:14:38,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:38,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:38,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db. 2023-07-18 10:14:38,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6b1daae9da3cb2f310946b5123a72db: 2023-07-18 10:14:38,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6. 
2023-07-18 10:14:38,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6: 2023-07-18 10:14:38,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:38,915 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=f6b1daae9da3cb2f310946b5123a72db, regionState=CLOSED 2023-07-18 10:14:38,915 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675278915"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675278915"}]},"ts":"1689675278915"} 2023-07-18 10:14:38,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:38,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1505fbb029f19f5e2eaf1bcb2ea37bc2, disabling compactions & flushes 2023-07-18 10:14:38,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:38,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 2023-07-18 10:14:38,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. after waiting 0 ms 2023-07-18 10:14:38,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 
2023-07-18 10:14:38,918 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, regionState=CLOSED 2023-07-18 10:14:38,918 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675278918"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675278918"}]},"ts":"1689675278918"} 2023-07-18 10:14:38,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-18 10:14:38,926 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, server=jenkins-hbase4.apache.org,35633,1689675275991 in 197 msec 2023-07-18 10:14:38,926 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=39 2023-07-18 10:14:38,926 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=39, state=SUCCESS; CloseRegionProcedure f6b1daae9da3cb2f310946b5123a72db, server=jenkins-hbase4.apache.org,40033,1689675272048 in 201 msec 2023-07-18 10:14:38,928 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, UNASSIGN in 215 msec 2023-07-18 10:14:38,928 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f6b1daae9da3cb2f310946b5123a72db, UNASSIGN in 215 msec 2023-07-18 10:14:38,929 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:38,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2. 
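[Editor's sketch] The DisableTableProcedure (pid=38) and its per-region UNASSIGN children above are what a blocking client-side disable drives. A minimal sketch against the standard HBase 2.x Admin API; the class name is illustrative, the table name comes from the log.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableTableExample {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(table)) {
            // Blocks until the master's DisableTableProcedure, including the
            // per-region UNASSIGN subprocedures seen above, has completed.
            admin.disableTable(table);
          }
          System.out.println("disabled: " + admin.isTableDisabled(table));
        }
      }
    }
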
2023-07-18 10:14:38,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1505fbb029f19f5e2eaf1bcb2ea37bc2: 2023-07-18 10:14:38,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:38,937 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=1505fbb029f19f5e2eaf1bcb2ea37bc2, regionState=CLOSED 2023-07-18 10:14:38,938 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675278937"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675278937"}]},"ts":"1689675278937"} 2023-07-18 10:14:38,943 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=40 2023-07-18 10:14:38,943 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=40, state=SUCCESS; CloseRegionProcedure 1505fbb029f19f5e2eaf1bcb2ea37bc2, server=jenkins-hbase4.apache.org,35633,1689675275991 in 223 msec 2023-07-18 10:14:38,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=38 2023-07-18 10:14:38,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1505fbb029f19f5e2eaf1bcb2ea37bc2, UNASSIGN in 232 msec 2023-07-18 10:14:38,948 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675278948"}]},"ts":"1689675278948"} 2023-07-18 10:14:38,950 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 10:14:38,952 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 10:14:38,956 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 258 msec 2023-07-18 10:14:39,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-18 10:14:39,011 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-18 10:14:39,013 INFO [Listener at localhost/45689] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:39,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:39,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-18 10:14:39,035 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-18 10:14:39,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-18 10:14:39,053 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:39,053 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:39,053 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:39,053 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:39,053 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:39,058 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/recovered.edits] 2023-07-18 10:14:39,059 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/recovered.edits] 2023-07-18 10:14:39,060 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/recovered.edits] 2023-07-18 10:14:39,061 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/recovered.edits] 2023-07-18 10:14:39,061 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/f, FileablePath, 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/recovered.edits] 2023-07-18 10:14:39,082 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/recovered.edits/7.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2/recovered.edits/7.seqid 2023-07-18 10:14:39,083 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/recovered.edits/7.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a/recovered.edits/7.seqid 2023-07-18 10:14:39,084 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/recovered.edits/7.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4/recovered.edits/7.seqid 2023-07-18 10:14:39,084 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1505fbb029f19f5e2eaf1bcb2ea37bc2 2023-07-18 10:14:39,085 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/recovered.edits/7.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6/recovered.edits/7.seqid 2023-07-18 10:14:39,085 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/974f3092d70118b627077e1fc3fa861a 2023-07-18 10:14:39,086 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/19d27a294b025821541b9f52606e06d4 2023-07-18 10:14:39,086 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6 2023-07-18 10:14:39,088 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/recovered.edits/7.seqid to 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db/recovered.edits/7.seqid 2023-07-18 10:14:39,089 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f6b1daae9da3cb2f310946b5123a72db 2023-07-18 10:14:39,089 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 10:14:39,123 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 10:14:39,128 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 10:14:39,129 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-18 10:14:39,129 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675279129"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:39,129 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675279129"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:39,129 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675279129"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:39,129 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675279129"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:39,129 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675279129"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:39,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-18 10:14:39,141 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 10:14:39,142 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f6b1daae9da3cb2f310946b5123a72db, NAME => 'Group_testTableMoveTruncateAndDrop,,1689675276412.f6b1daae9da3cb2f310946b5123a72db.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 1505fbb029f19f5e2eaf1bcb2ea37bc2, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689675276412.1505fbb029f19f5e2eaf1bcb2ea37bc2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 974f3092d70118b627077e1fc3fa861a, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675276412.974f3092d70118b627077e1fc3fa861a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 
19d27a294b025821541b9f52606e06d4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675276412.19d27a294b025821541b9f52606e06d4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689675276412.8fa3aa6ae4bb8e055cbf8c2cbd5fc7c6.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 10:14:39,142 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-18 10:14:39,142 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689675279142"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:39,145 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 10:14:39,154 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,155 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,155 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,155 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:39,155 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,156 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11 empty. 2023-07-18 10:14:39,156 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf empty. 2023-07-18 10:14:39,156 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a empty. 2023-07-18 10:14:39,156 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa empty. 2023-07-18 10:14:39,156 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b empty. 
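[Editor's sketch] The TruncateTableProcedure (pid=49, preserveSplits=true) archives the old region directories, as logged above, and then recreates the table with the same split points. From a client this is a single Admin call, sketched below assuming the table has already been disabled; the class name is illustrative.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTableExample {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // preserveSplits=true keeps the existing split points, so the table is
          // recreated with five fresh regions, as the RegionOpenAndInit entries
          // that follow in the log show.
          admin.truncateTable(table, true);
        }
      }
    }
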
2023-07-18 10:14:39,157 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,157 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,158 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,157 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:39,157 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,158 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 10:14:39,186 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:39,188 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 46f9f3bf090854793e798f4237b37d11, NAME => 'Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:39,188 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => cc9945fdc8a63ea595d9821857e656bf, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:39,189 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 17ef7ed6f73f3a8b8148bbc87735c8fa, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:39,255 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,256 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 17ef7ed6f73f3a8b8148bbc87735c8fa, disabling compactions & flushes 2023-07-18 10:14:39,256 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:39,256 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:39,256 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. after waiting 0 ms 2023-07-18 10:14:39,256 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:39,256 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 
2023-07-18 10:14:39,256 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 17ef7ed6f73f3a8b8148bbc87735c8fa: 2023-07-18 10:14:39,256 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 4c6b5065947d11b27bc8d42108b2407b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:39,291 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,291 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,292 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing cc9945fdc8a63ea595d9821857e656bf, disabling compactions & flushes 2023-07-18 10:14:39,292 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 2023-07-18 10:14:39,292 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 2023-07-18 10:14:39,292 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. after waiting 0 ms 2023-07-18 10:14:39,292 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 2023-07-18 10:14:39,292 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 
2023-07-18 10:14:39,292 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for cc9945fdc8a63ea595d9821857e656bf: 2023-07-18 10:14:39,292 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 46f9f3bf090854793e798f4237b37d11, disabling compactions & flushes 2023-07-18 10:14:39,292 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:39,292 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => c8a374203e1ad01005820f1a69d8a29a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:39,292 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:39,293 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. after waiting 0 ms 2023-07-18 10:14:39,293 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:39,293 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:39,293 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 46f9f3bf090854793e798f4237b37d11: 2023-07-18 10:14:39,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-18 10:14:39,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing c8a374203e1ad01005820f1a69d8a29a, disabling compactions & flushes 2023-07-18 10:14:39,341 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 
2023-07-18 10:14:39,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 2023-07-18 10:14:39,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. after waiting 0 ms 2023-07-18 10:14:39,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 2023-07-18 10:14:39,341 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 2023-07-18 10:14:39,342 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for c8a374203e1ad01005820f1a69d8a29a: 2023-07-18 10:14:39,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 4c6b5065947d11b27bc8d42108b2407b, disabling compactions & flushes 2023-07-18 10:14:39,345 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:39,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:39,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. after waiting 0 ms 2023-07-18 10:14:39,345 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:39,346 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 
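
The RegionOpenAndInit entries above carry the full table descriptor used while re-creating the regions (a single family 'f' with one version, no bloom filter, no compression) and the preserved boundaries aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz. Purely as an illustration, and not something taken from this log, the following sketch shows how a pre-split table with that shape could be defined through the public HBase 2.x client API; the class name and the createTable call are assumptions for the sketch.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      public static void main(String[] args) throws Exception {
        // Descriptor mirroring the attributes printed in the log:
        // REGION_REPLICATION => '1', family 'f' with VERSIONS => '1', BLOOMFILTER => 'NONE'.
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setRegionReplication(1)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)
                .setBloomFilterType(BloomType.NONE)
                .build())
            .build();
        // Split points copied from the region boundaries in the log;
        // toBytesBinary understands the \xNN escapes used there.
        byte[][] splits = {
            Bytes.toBytes("aaaaa"),
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytes("zzzzz")
        };
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(desc, splits);  // yields the same five regions seen above
        }
      }
    }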
2023-07-18 10:14:39,346 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 4c6b5065947d11b27bc8d42108b2407b: 2023-07-18 10:14:39,351 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675279351"}]},"ts":"1689675279351"} 2023-07-18 10:14:39,352 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675279351"}]},"ts":"1689675279351"} 2023-07-18 10:14:39,352 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675279351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675279351"}]},"ts":"1689675279351"} 2023-07-18 10:14:39,352 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675279351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675279351"}]},"ts":"1689675279351"} 2023-07-18 10:14:39,352 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675279351"}]},"ts":"1689675279351"} 2023-07-18 10:14:39,356 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
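
The Put entries and "Added 5 regions to meta" above are the re-creation step of TruncateTableProcedure pid=49 with preserveSplits=true (the procedure is named explicitly further down when it finishes). A minimal client-side sketch of issuing such a truncate, assuming the table name from the log and that the caller disables the table first:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncatePreservingSplitsSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);   // truncate requires a disabled table
          }
          // preserveSplits=true: the old region boundaries are kept, so the master
          // re-creates five empty regions and registers them in hbase:meta.
          admin.truncateTable(table, true);
        }
      }
    }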
2023-07-18 10:14:39,357 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675279357"}]},"ts":"1689675279357"} 2023-07-18 10:14:39,359 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 10:14:39,365 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:39,365 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:39,365 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:39,365 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:39,369 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46f9f3bf090854793e798f4237b37d11, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ef7ed6f73f3a8b8148bbc87735c8fa, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc9945fdc8a63ea595d9821857e656bf, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6b5065947d11b27bc8d42108b2407b, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8a374203e1ad01005820f1a69d8a29a, ASSIGN}] 2023-07-18 10:14:39,372 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46f9f3bf090854793e798f4237b37d11, ASSIGN 2023-07-18 10:14:39,372 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ef7ed6f73f3a8b8148bbc87735c8fa, ASSIGN 2023-07-18 10:14:39,372 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc9945fdc8a63ea595d9821857e656bf, ASSIGN 2023-07-18 10:14:39,372 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8a374203e1ad01005820f1a69d8a29a, ASSIGN 2023-07-18 10:14:39,373 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6b5065947d11b27bc8d42108b2407b, ASSIGN 2023-07-18 10:14:39,374 INFO [PEWorker-4] 
assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46f9f3bf090854793e798f4237b37d11, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:39,374 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ef7ed6f73f3a8b8148bbc87735c8fa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:39,374 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8a374203e1ad01005820f1a69d8a29a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40033,1689675272048; forceNewPlan=false, retain=false 2023-07-18 10:14:39,374 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc9945fdc8a63ea595d9821857e656bf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40033,1689675272048; forceNewPlan=false, retain=false 2023-07-18 10:14:39,375 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6b5065947d11b27bc8d42108b2407b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:39,524 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
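
The recurring "Checking to see if procedure is done pid=49" entries are the client polling the master while the truncate and the ASSIGN subprocedures spawned above run to completion. A hypothetical equivalent wait using only the public Admin API, with the polling interval chosen arbitrarily for the sketch:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForTableAvailableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Block until every region of the table is assigned and serving.
          while (!admin.isTableAvailable(table)) {
            Thread.sleep(100);   // arbitrary poll interval for this sketch
          }
        }
      }
    }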
2023-07-18 10:14:39,527 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=c8a374203e1ad01005820f1a69d8a29a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:39,527 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=cc9945fdc8a63ea595d9821857e656bf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:39,527 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=17ef7ed6f73f3a8b8148bbc87735c8fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:39,527 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=46f9f3bf090854793e798f4237b37d11, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:39,528 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279527"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675279527"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675279527"}]},"ts":"1689675279527"} 2023-07-18 10:14:39,527 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=4c6b5065947d11b27bc8d42108b2407b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:39,528 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675279527"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675279527"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675279527"}]},"ts":"1689675279527"} 2023-07-18 10:14:39,528 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279527"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675279527"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675279527"}]},"ts":"1689675279527"} 2023-07-18 10:14:39,528 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279527"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675279527"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675279527"}]},"ts":"1689675279527"} 2023-07-18 10:14:39,528 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675279527"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675279527"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675279527"}]},"ts":"1689675279527"} 2023-07-18 10:14:39,531 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=51, state=RUNNABLE; OpenRegionProcedure 
17ef7ed6f73f3a8b8148bbc87735c8fa, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:39,532 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=50, state=RUNNABLE; OpenRegionProcedure 46f9f3bf090854793e798f4237b37d11, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:39,534 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=53, state=RUNNABLE; OpenRegionProcedure 4c6b5065947d11b27bc8d42108b2407b, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:39,535 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=52, state=RUNNABLE; OpenRegionProcedure cc9945fdc8a63ea595d9821857e656bf, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:39,538 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; OpenRegionProcedure c8a374203e1ad01005820f1a69d8a29a, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:39,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-18 10:14:39,689 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:39,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c6b5065947d11b27bc8d42108b2407b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 10:14:39,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:39,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:39,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:39,692 INFO [StoreOpener-4c6b5065947d11b27bc8d42108b2407b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:39,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 
2023-07-18 10:14:39,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8a374203e1ad01005820f1a69d8a29a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 10:14:39,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,695 DEBUG [StoreOpener-4c6b5065947d11b27bc8d42108b2407b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b/f 2023-07-18 10:14:39,695 DEBUG [StoreOpener-4c6b5065947d11b27bc8d42108b2407b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b/f 2023-07-18 10:14:39,695 INFO [StoreOpener-4c6b5065947d11b27bc8d42108b2407b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c6b5065947d11b27bc8d42108b2407b columnFamilyName f 2023-07-18 10:14:39,699 INFO [StoreOpener-4c6b5065947d11b27bc8d42108b2407b-1] regionserver.HStore(310): Store=4c6b5065947d11b27bc8d42108b2407b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:39,700 INFO [StoreOpener-c8a374203e1ad01005820f1a69d8a29a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b 
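
The CompactionConfiguration entries above print the effective per-store compaction settings (minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, weekly major compactions with 0.5 jitter). These map onto standard hbase-site.xml keys; the sketch below sets the same values programmatically, only to connect the logged numbers to their configuration names.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio 1.200000
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter 0.500000
      }
    }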
2023-07-18 10:14:39,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:39,703 DEBUG [StoreOpener-c8a374203e1ad01005820f1a69d8a29a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a/f 2023-07-18 10:14:39,703 DEBUG [StoreOpener-c8a374203e1ad01005820f1a69d8a29a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a/f 2023-07-18 10:14:39,703 INFO [StoreOpener-c8a374203e1ad01005820f1a69d8a29a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8a374203e1ad01005820f1a69d8a29a columnFamilyName f 2023-07-18 10:14:39,704 INFO [StoreOpener-c8a374203e1ad01005820f1a69d8a29a-1] regionserver.HStore(310): Store=c8a374203e1ad01005820f1a69d8a29a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:39,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,707 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:39,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:39,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:39,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a/recovered.edits/1.seqid, 
newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:39,742 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c6b5065947d11b27bc8d42108b2407b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10856669120, jitterRate=0.011106103658676147}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:39,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c6b5065947d11b27bc8d42108b2407b: 2023-07-18 10:14:39,742 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8a374203e1ad01005820f1a69d8a29a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11030611840, jitterRate=0.027305781841278076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:39,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8a374203e1ad01005820f1a69d8a29a: 2023-07-18 10:14:39,743 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b., pid=57, masterSystemTime=1689675279684 2023-07-18 10:14:39,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:39,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:39,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 
2023-07-18 10:14:39,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 46f9f3bf090854793e798f4237b37d11, NAME => 'Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 10:14:39,747 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=4c6b5065947d11b27bc8d42108b2407b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:39,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,747 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279747"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675279747"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675279747"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675279747"}]},"ts":"1689675279747"} 2023-07-18 10:14:39,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,749 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a., pid=59, masterSystemTime=1689675279689 2023-07-18 10:14:39,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 2023-07-18 10:14:39,751 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 2023-07-18 10:14:39,751 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 
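
The RegionStateStore Puts above write the info:server, info:serverstartcode, info:seqnumDuringOpen and info:state columns of each region row in hbase:meta. Purely as an illustration, and not something this test does at this point, those rows can be read back with an ordinary prefix scan of hbase:meta:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRegionRowsSketch {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Meta row keys start with "<table>,<startkey>,<ts>.<encoded>.", so a
          // prefix scan on the table name returns the five region rows seen above.
          Scan scan = new Scan()
              .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"));
          try (ResultScanner rs = meta.getScanner(scan)) {
            for (Result r : rs) {
              byte[] server = r.getValue(info, Bytes.toBytes("server"));
              byte[] state = r.getValue(info, Bytes.toBytes("state"));
              System.out.println(Bytes.toString(r.getRow())
                  + " server=" + (server == null ? "-" : Bytes.toString(server))
                  + " state=" + (state == null ? "-" : Bytes.toString(state)));
            }
          }
        }
      }
    }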
2023-07-18 10:14:39,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc9945fdc8a63ea595d9821857e656bf, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 10:14:39,751 INFO [StoreOpener-46f9f3bf090854793e798f4237b37d11-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,754 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=c8a374203e1ad01005820f1a69d8a29a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:39,755 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675279754"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675279754"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675279754"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675279754"}]},"ts":"1689675279754"} 2023-07-18 10:14:39,756 INFO [StoreOpener-cc9945fdc8a63ea595d9821857e656bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,757 DEBUG [StoreOpener-46f9f3bf090854793e798f4237b37d11-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11/f 2023-07-18 10:14:39,757 DEBUG [StoreOpener-46f9f3bf090854793e798f4237b37d11-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11/f 2023-07-18 10:14:39,757 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=53 2023-07-18 10:14:39,757 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=53, state=SUCCESS; OpenRegionProcedure 4c6b5065947d11b27bc8d42108b2407b, server=jenkins-hbase4.apache.org,35633,1689675275991 in 216 msec 2023-07-18 10:14:39,757 INFO [StoreOpener-46f9f3bf090854793e798f4237b37d11-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 46f9f3bf090854793e798f4237b37d11 columnFamilyName f 2023-07-18 10:14:39,758 INFO [StoreOpener-46f9f3bf090854793e798f4237b37d11-1] regionserver.HStore(310): Store=46f9f3bf090854793e798f4237b37d11/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:39,760 DEBUG [StoreOpener-cc9945fdc8a63ea595d9821857e656bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf/f 2023-07-18 10:14:39,760 DEBUG [StoreOpener-cc9945fdc8a63ea595d9821857e656bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf/f 2023-07-18 10:14:39,760 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6b5065947d11b27bc8d42108b2407b, ASSIGN in 388 msec 2023-07-18 10:14:39,760 INFO [StoreOpener-cc9945fdc8a63ea595d9821857e656bf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc9945fdc8a63ea595d9821857e656bf columnFamilyName f 2023-07-18 10:14:39,763 INFO [StoreOpener-cc9945fdc8a63ea595d9821857e656bf-1] regionserver.HStore(310): Store=cc9945fdc8a63ea595d9821857e656bf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:39,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,772 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-18 10:14:39,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; OpenRegionProcedure c8a374203e1ad01005820f1a69d8a29a, server=jenkins-hbase4.apache.org,40033,1689675272048 in 221 msec 2023-07-18 10:14:39,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,785 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8a374203e1ad01005820f1a69d8a29a, ASSIGN in 405 msec 2023-07-18 10:14:39,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:39,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:39,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:39,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:39,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc9945fdc8a63ea595d9821857e656bf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10339935360, jitterRate=-0.03701847791671753}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:39,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc9945fdc8a63ea595d9821857e656bf: 2023-07-18 10:14:39,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf., pid=58, masterSystemTime=1689675279689 2023-07-18 10:14:39,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 46f9f3bf090854793e798f4237b37d11; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11204252800, jitterRate=0.04347735643386841}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:39,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 46f9f3bf090854793e798f4237b37d11: 2023-07-18 10:14:39,812 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11., pid=56, masterSystemTime=1689675279684 2023-07-18 10:14:39,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 2023-07-18 10:14:39,814 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 2023-07-18 10:14:39,815 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=cc9945fdc8a63ea595d9821857e656bf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:39,815 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279815"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675279815"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675279815"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675279815"}]},"ts":"1689675279815"} 2023-07-18 10:14:39,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:39,816 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:39,816 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 
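
As the regions come OPEN one by one, the log records which server each of them landed on (jenkins-hbase4.apache.org,35633,… versus …,40033,…). A small, hypothetical way to list the final placement from a client, using the public RegionLocator API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionPlacementSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(
                 TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // Encoded name (e.g. 4c6b5065947d11b27bc8d42108b2407b) and its hosting server.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }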
2023-07-18 10:14:39,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17ef7ed6f73f3a8b8148bbc87735c8fa, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 10:14:39,816 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=46f9f3bf090854793e798f4237b37d11, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:39,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:39,816 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675279816"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675279816"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675279816"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675279816"}]},"ts":"1689675279816"} 2023-07-18 10:14:39,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,820 INFO [StoreOpener-17ef7ed6f73f3a8b8148bbc87735c8fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,821 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=52 2023-07-18 10:14:39,822 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=52, state=SUCCESS; OpenRegionProcedure cc9945fdc8a63ea595d9821857e656bf, server=jenkins-hbase4.apache.org,40033,1689675272048 in 282 msec 2023-07-18 10:14:39,824 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=50 2023-07-18 10:14:39,824 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=50, state=SUCCESS; OpenRegionProcedure 46f9f3bf090854793e798f4237b37d11, server=jenkins-hbase4.apache.org,35633,1689675275991 in 287 msec 2023-07-18 10:14:39,824 DEBUG [StoreOpener-17ef7ed6f73f3a8b8148bbc87735c8fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa/f 2023-07-18 10:14:39,824 DEBUG [StoreOpener-17ef7ed6f73f3a8b8148bbc87735c8fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa/f 2023-07-18 10:14:39,825 INFO [StoreOpener-17ef7ed6f73f3a8b8148bbc87735c8fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17ef7ed6f73f3a8b8148bbc87735c8fa columnFamilyName f 2023-07-18 10:14:39,826 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc9945fdc8a63ea595d9821857e656bf, ASSIGN in 453 msec 2023-07-18 10:14:39,826 INFO [StoreOpener-17ef7ed6f73f3a8b8148bbc87735c8fa-1] regionserver.HStore(310): Store=17ef7ed6f73f3a8b8148bbc87735c8fa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:39,826 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46f9f3bf090854793e798f4237b37d11, ASSIGN in 458 msec 2023-07-18 10:14:39,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:39,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:39,837 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 17ef7ed6f73f3a8b8148bbc87735c8fa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9766603840, jitterRate=-0.0904141366481781}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:39,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 17ef7ed6f73f3a8b8148bbc87735c8fa: 2023-07-18 10:14:39,838 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa., pid=55, masterSystemTime=1689675279684 2023-07-18 10:14:39,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:39,842 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:39,843 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=17ef7ed6f73f3a8b8148bbc87735c8fa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:39,843 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675279842"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675279842"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675279842"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675279842"}]},"ts":"1689675279842"} 2023-07-18 10:14:39,852 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=51 2023-07-18 10:14:39,852 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=51, state=SUCCESS; OpenRegionProcedure 17ef7ed6f73f3a8b8148bbc87735c8fa, server=jenkins-hbase4.apache.org,35633,1689675275991 in 314 msec 2023-07-18 10:14:39,854 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=49 2023-07-18 10:14:39,854 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ef7ed6f73f3a8b8148bbc87735c8fa, ASSIGN in 486 msec 2023-07-18 10:14:39,855 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675279854"}]},"ts":"1689675279854"} 2023-07-18 10:14:39,857 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 10:14:39,859 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-18 10:14:39,862 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 838 msec 2023-07-18 10:14:40,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-18 10:14:40,143 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-18 10:14:40,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:40,144 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:40,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:40,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:40,146 INFO [Listener at localhost/45689] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-18 10:14:40,152 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675280152"}]},"ts":"1689675280152"} 2023-07-18 10:14:40,154 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 10:14:40,156 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 10:14:40,157 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46f9f3bf090854793e798f4237b37d11, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ef7ed6f73f3a8b8148bbc87735c8fa, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc9945fdc8a63ea595d9821857e656bf, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6b5065947d11b27bc8d42108b2407b, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8a374203e1ad01005820f1a69d8a29a, UNASSIGN}] 2023-07-18 10:14:40,159 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6b5065947d11b27bc8d42108b2407b, UNASSIGN 2023-07-18 10:14:40,159 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=cc9945fdc8a63ea595d9821857e656bf, UNASSIGN 2023-07-18 10:14:40,159 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ef7ed6f73f3a8b8148bbc87735c8fa, UNASSIGN 2023-07-18 10:14:40,159 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46f9f3bf090854793e798f4237b37d11, UNASSIGN 2023-07-18 10:14:40,160 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8a374203e1ad01005820f1a69d8a29a, UNASSIGN 2023-07-18 10:14:40,160 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=4c6b5065947d11b27bc8d42108b2407b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:40,160 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=cc9945fdc8a63ea595d9821857e656bf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:40,160 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=17ef7ed6f73f3a8b8148bbc87735c8fa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:40,160 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=46f9f3bf090854793e798f4237b37d11, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:40,161 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675280160"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675280160"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675280160"}]},"ts":"1689675280160"} 2023-07-18 10:14:40,161 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675280160"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675280160"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675280160"}]},"ts":"1689675280160"} 2023-07-18 10:14:40,161 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675280160"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675280160"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675280160"}]},"ts":"1689675280160"} 2023-07-18 10:14:40,161 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675280160"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675280160"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675280160"}]},"ts":"1689675280160"} 2023-07-18 10:14:40,161 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=c8a374203e1ad01005820f1a69d8a29a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:40,161 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675280161"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675280161"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675280161"}]},"ts":"1689675280161"} 2023-07-18 10:14:40,162 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=63, state=RUNNABLE; CloseRegionProcedure cc9945fdc8a63ea595d9821857e656bf, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:40,164 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=61, state=RUNNABLE; CloseRegionProcedure 46f9f3bf090854793e798f4237b37d11, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:40,165 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=64, state=RUNNABLE; CloseRegionProcedure 4c6b5065947d11b27bc8d42108b2407b, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:40,167 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=62, state=RUNNABLE; CloseRegionProcedure 17ef7ed6f73f3a8b8148bbc87735c8fa, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:40,168 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=65, state=RUNNABLE; CloseRegionProcedure c8a374203e1ad01005820f1a69d8a29a, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:40,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-18 10:14:40,307 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 10:14:40,317 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:40,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc9945fdc8a63ea595d9821857e656bf, disabling compactions & flushes 2023-07-18 10:14:40,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 2023-07-18 10:14:40,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 
2023-07-18 10:14:40,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. after waiting 0 ms 2023-07-18 10:14:40,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 2023-07-18 10:14:40,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:40,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 17ef7ed6f73f3a8b8148bbc87735c8fa, disabling compactions & flushes 2023-07-18 10:14:40,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:40,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:40,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. after waiting 0 ms 2023-07-18 10:14:40,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:40,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:40,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:40,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa. 2023-07-18 10:14:40,332 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 17ef7ed6f73f3a8b8148bbc87735c8fa: 2023-07-18 10:14:40,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf. 
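Each RegionStateStore "Put" above is an ordinary mutation against the region's row in hbase:meta: while a region is OPEN the info family carries regioninfo, server, serverstartcode and seqnumDuringOpen, and during CLOSING/CLOSED it carries regioninfo, sn and state. The snippet below is a rough sketch of how those rows could be inspected from a client, assuming a reachable cluster; the prefix scan and class name are illustrative only.

    import java.util.stream.Collectors;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRegionStateRowsSketch {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Region rows in hbase:meta start with "<table name>,"; these are the rows
          // the RegionStateStore puts in the log are written against.
          Scan scan = new Scan()
              .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
              .addFamily(info);
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result r : scanner) {
              // Expected qualifiers: regioninfo, server, serverstartcode, seqnumDuringOpen
              // for an OPEN region; regioninfo, sn, state while CLOSING/CLOSED.
              System.out.println(Bytes.toStringBinary(r.getRow()) + " -> "
                  + r.getFamilyMap(info).keySet().stream()
                      .map(Bytes::toString).collect(Collectors.joining(", ")));
            }
          }
        }
      }
    }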
2023-07-18 10:14:40,332 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc9945fdc8a63ea595d9821857e656bf: 2023-07-18 10:14:40,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:40,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:40,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 46f9f3bf090854793e798f4237b37d11, disabling compactions & flushes 2023-07-18 10:14:40,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:40,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:40,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. after waiting 0 ms 2023-07-18 10:14:40,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:40,336 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=17ef7ed6f73f3a8b8148bbc87735c8fa, regionState=CLOSED 2023-07-18 10:14:40,336 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675280336"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675280336"}]},"ts":"1689675280336"} 2023-07-18 10:14:40,336 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:40,336 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:40,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8a374203e1ad01005820f1a69d8a29a, disabling compactions & flushes 2023-07-18 10:14:40,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 2023-07-18 10:14:40,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 2023-07-18 10:14:40,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. after waiting 0 ms 2023-07-18 10:14:40,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 
2023-07-18 10:14:40,339 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=cc9945fdc8a63ea595d9821857e656bf, regionState=CLOSED 2023-07-18 10:14:40,339 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675280339"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675280339"}]},"ts":"1689675280339"} 2023-07-18 10:14:40,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:40,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:40,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a. 2023-07-18 10:14:40,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8a374203e1ad01005820f1a69d8a29a: 2023-07-18 10:14:40,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11. 2023-07-18 10:14:40,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 46f9f3bf090854793e798f4237b37d11: 2023-07-18 10:14:40,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:40,357 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=62 2023-07-18 10:14:40,357 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=62, state=SUCCESS; CloseRegionProcedure 17ef7ed6f73f3a8b8148bbc87735c8fa, server=jenkins-hbase4.apache.org,35633,1689675275991 in 177 msec 2023-07-18 10:14:40,358 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:40,358 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:40,358 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=c8a374203e1ad01005820f1a69d8a29a, regionState=CLOSED 2023-07-18 10:14:40,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c6b5065947d11b27bc8d42108b2407b, disabling compactions & flushes 2023-07-18 10:14:40,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 
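The repeated "Checking to see if procedure is done pid=60" entries are the client side of HBaseAdmin's TableFuture polling the master until DisableTableProcedure reports SUCCESS. Below is a hedged sketch of the equivalent asynchronous call; the timeout and class name are arbitrary choices for illustration.

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableAndWaitSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Waiting on the returned future polls the master for the procedure result,
          // which is what produces the repeated "Checking to see if procedure is done"
          // entries in the log.
          Future<Void> pending = admin.disableTableAsync(table);
          pending.get(60, TimeUnit.SECONDS);
          // When the procedure finishes, hbase:meta records the table as DISABLED.
          System.out.println("disabled: " + admin.isTableDisabled(table));
        }
      }
    }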
2023-07-18 10:14:40,359 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675280358"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675280358"}]},"ts":"1689675280358"} 2023-07-18 10:14:40,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:40,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. after waiting 0 ms 2023-07-18 10:14:40,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:40,360 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=63 2023-07-18 10:14:40,360 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=46f9f3bf090854793e798f4237b37d11, regionState=CLOSED 2023-07-18 10:14:40,360 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; CloseRegionProcedure cc9945fdc8a63ea595d9821857e656bf, server=jenkins-hbase4.apache.org,40033,1689675272048 in 190 msec 2023-07-18 10:14:40,360 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689675280360"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675280360"}]},"ts":"1689675280360"} 2023-07-18 10:14:40,363 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=17ef7ed6f73f3a8b8148bbc87735c8fa, UNASSIGN in 201 msec 2023-07-18 10:14:40,364 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cc9945fdc8a63ea595d9821857e656bf, UNASSIGN in 204 msec 2023-07-18 10:14:40,367 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=65 2023-07-18 10:14:40,367 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=65, state=SUCCESS; CloseRegionProcedure c8a374203e1ad01005820f1a69d8a29a, server=jenkins-hbase4.apache.org,40033,1689675272048 in 195 msec 2023-07-18 10:14:40,368 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=61 2023-07-18 10:14:40,368 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=61, state=SUCCESS; CloseRegionProcedure 46f9f3bf090854793e798f4237b37d11, server=jenkins-hbase4.apache.org,35633,1689675275991 in 202 msec 2023-07-18 10:14:40,369 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c8a374203e1ad01005820f1a69d8a29a, UNASSIGN in 211 msec 2023-07-18 10:14:40,370 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46f9f3bf090854793e798f4237b37d11, UNASSIGN in 212 msec 2023-07-18 10:14:40,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:40,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b. 2023-07-18 10:14:40,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c6b5065947d11b27bc8d42108b2407b: 2023-07-18 10:14:40,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:40,383 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=4c6b5065947d11b27bc8d42108b2407b, regionState=CLOSED 2023-07-18 10:14:40,383 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689675280382"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675280382"}]},"ts":"1689675280382"} 2023-07-18 10:14:40,388 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=64 2023-07-18 10:14:40,388 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=64, state=SUCCESS; CloseRegionProcedure 4c6b5065947d11b27bc8d42108b2407b, server=jenkins-hbase4.apache.org,35633,1689675275991 in 220 msec 2023-07-18 10:14:40,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=60 2023-07-18 10:14:40,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6b5065947d11b27bc8d42108b2407b, UNASSIGN in 232 msec 2023-07-18 10:14:40,391 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675280391"}]},"ts":"1689675280391"} 2023-07-18 10:14:40,393 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 10:14:40,395 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 10:14:40,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 250 msec 2023-07-18 10:14:40,406 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 10:14:40,407 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 10:14:40,407 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:14:40,408 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 10:14:40,408 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 10:14:40,408 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 10:14:40,411 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 10:14:40,412 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 10:14:40,413 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 10:14:40,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-18 10:14:40,456 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-18 10:14:40,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,478 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_125047047' 2023-07-18 10:14:40,480 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:40,492 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:40,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,492 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:40,492 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:40,492 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:40,492 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:40,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,497 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a/recovered.edits] 2023-07-18 10:14:40,498 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11/recovered.edits] 2023-07-18 10:14:40,498 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa/recovered.edits] 2023-07-18 10:14:40,498 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf/recovered.edits] 2023-07-18 10:14:40,498 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b/f, FileablePath, 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b/recovered.edits] 2023-07-18 10:14:40,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:40,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-18 10:14:40,520 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa/recovered.edits/4.seqid 2023-07-18 10:14:40,520 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a/recovered.edits/4.seqid 2023-07-18 10:14:40,521 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf/recovered.edits/4.seqid 2023-07-18 10:14:40,522 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c8a374203e1ad01005820f1a69d8a29a 2023-07-18 10:14:40,523 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11/recovered.edits/4.seqid 2023-07-18 10:14:40,525 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b/recovered.edits/4.seqid 2023-07-18 10:14:40,525 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/17ef7ed6f73f3a8b8148bbc87735c8fa 2023-07-18 10:14:40,526 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cc9945fdc8a63ea595d9821857e656bf 2023-07-18 10:14:40,526 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46f9f3bf090854793e798f4237b37d11 2023-07-18 10:14:40,526 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6b5065947d11b27bc8d42108b2407b 2023-07-18 10:14:40,526 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 10:14:40,529 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,536 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 10:14:40,546 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 10:14:40,552 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,552 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-18 10:14:40,552 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675280552"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:40,553 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675280552"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:40,553 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675280552"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:40,553 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675280552"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:40,553 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675280552"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:40,556 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 10:14:40,556 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 
46f9f3bf090854793e798f4237b37d11, NAME => 'Group_testTableMoveTruncateAndDrop,,1689675279091.46f9f3bf090854793e798f4237b37d11.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 17ef7ed6f73f3a8b8148bbc87735c8fa, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689675279091.17ef7ed6f73f3a8b8148bbc87735c8fa.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => cc9945fdc8a63ea595d9821857e656bf, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689675279091.cc9945fdc8a63ea595d9821857e656bf.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 4c6b5065947d11b27bc8d42108b2407b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689675279091.4c6b5065947d11b27bc8d42108b2407b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => c8a374203e1ad01005820f1a69d8a29a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689675279091.c8a374203e1ad01005820f1a69d8a29a.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 10:14:40,556 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-18 10:14:40,556 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689675280556"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:40,558 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 10:14:40,561 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 10:14:40,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 95 msec 2023-07-18 10:14:40,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-18 10:14:40,613 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-18 10:14:40,614 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:40,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:40,620 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,622 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:40,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty 
set. Ignoring. 2023-07-18 10:14:40,622 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:40,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup default 2023-07-18 10:14:40,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:40,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:40,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_125047047, current retry=0 2023-07-18 10:14:40,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048] are moved back to Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:40,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_125047047 => default 2023-07-18 10:14:40,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:40,635 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_125047047 2023-07-18 10:14:40,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 10:14:40,646 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:40,647 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:40,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
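These entries come from the per-test cleanup in TestRSGroupsBase: the table has just been deleted (DeleteTableProcedure pid=71), leftover tables and servers are moved back to the default group (an empty set is simply ignored, as logged), and the test group is removed. The following is a rough sketch of the equivalent client calls, assuming the RSGroupAdminClient(Connection) constructor from the hbase-rsgroup module; the server address and group name are copied from the log purely for illustration.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        String group = "Group_testTableMoveTruncateAndDrop_125047047";
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // DeleteTableProcedure (pid=71 in the log): archive the region directories,
          // delete the region rows from hbase:meta, drop the table descriptor.
          admin.deleteTable(table);

          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Moving an empty table set is ignored by the server, as logged.
          rsGroupAdmin.moveTables(Collections.emptySet(), "default");
          // Move the group's servers back to the default group; the address here is
          // taken from the log for illustration.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40033)),
              "default");
          // Remove the now-empty test group.
          rsGroupAdmin.removeRSGroup(group);
        }
      }
    }

The later attempt to move the master's own address (jenkins-hbase4.apache.org:42907) into the "master" group fails with the ConstraintException traced below, and the teardown tolerates it ("Got this on setup, FYI").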
2023-07-18 10:14:40,647 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:40,648 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:40,649 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:40,650 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:40,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:40,656 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:40,661 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:40,663 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:40,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:40,674 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:40,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,680 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:40,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:40,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676480680, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:40,682 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:40,683 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:40,684 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,684 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,684 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:40,685 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:40,685 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:40,713 INFO [Listener at localhost/45689] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=495 (was 421) Potentially hanging thread: RS:3;jenkins-hbase4:35633 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1634953708_17 at /127.0.0.1:50556 [Receiving block BP-1078778366-172.31.14.131-1689675266234:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1078778366-172.31.14.131-1689675266234:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1634953708_17 at /127.0.0.1:33096 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1107225457-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-164845421_17 at /127.0.0.1:50604 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:38869 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1634953708_17 at /127.0.0.1:33246 [Receiving block BP-1078778366-172.31.14.131-1689675266234:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53154@0x39f9e47b-SendThread(127.0.0.1:53154) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53154@0x39f9e47b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1078778366-172.31.14.131-1689675266234:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796-prefix:jenkins-hbase4.apache.org,35633,1689675275991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1107225457-635-acceptor-0@90f9bd1-ServerConnector@191c4c74{HTTP/1.1, (http/1.1)}{0.0.0.0:44927} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-673a46c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35633Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1107225457-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1107225457-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1107225457-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35633-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1107225457-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1078778366-172.31.14.131-1689675266234:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1107225457-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53154@0x39f9e47b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:38869 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-164845421_17 at /127.0.0.1:34506 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1107225457-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1634953708_17 at /127.0.0.1:34518 [Receiving block BP-1078778366-172.31.14.131-1689675266234:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=768 (was 673) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=495 (was 504), ProcessCount=173 (was 173), AvailableMemoryMB=3005 (was 3150) 2023-07-18 10:14:40,731 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=495, OpenFileDescriptor=768, MaxFileDescriptor=60000, SystemLoadAverage=495, ProcessCount=173, AvailableMemoryMB=3005 2023-07-18 10:14:40,731 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-18 10:14:40,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:40,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 10:14:40,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:40,740 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:40,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:40,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:40,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:40,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:40,757 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:40,758 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:40,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-18 10:14:40,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:40,764 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:40,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,768 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,770 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:40,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:40,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676480770, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:40,771 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:14:40,773 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:40,774 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,774 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,775 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:40,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:40,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:40,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-18 10:14:40,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:40,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:40186 deadline: 1689676480777, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 10:14:40,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-18 10:14:40,779 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:40,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:40186 deadline: 1689676480778, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 10:14:40,780 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-18 10:14:40,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:40,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:40186 deadline: 1689676480780, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 10:14:40,781 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-18 10:14:40,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-18 10:14:40,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:40,789 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:40,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:40,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
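Note on the group-name constraint exercised above: "foo*", "foo@" and "-" are rejected with a ConstraintException ("RSGroup name should only contain alphanumeric characters"), while "foo_123" is accepted, so underscores are tolerated despite the wording of the message. The following is a minimal, self-contained sketch with the same observable accept/reject behavior; the class name, method name and exact regex are illustrative assumptions, not the HBase implementation referenced in RSGroupInfoManagerImpl.checkGroupName.

// Illustrative sketch only: mirrors the behavior seen in the log above
// (foo_123 accepted; foo*, foo@ and - rejected). Not the actual HBase source.
import java.util.regex.Pattern;

public class GroupNameCheckSketch {
  // Assumption: alphanumerics plus underscore, consistent with "foo_123" being accepted.
  private static final Pattern VALID = Pattern.compile("^[a-zA-Z0-9_]+$");

  static void checkGroupName(String name) {
    if (name == null || !VALID.matcher(name).matches()) {
      // The server reports this condition as org.apache.hadoop.hbase.constraint.ConstraintException.
      throw new IllegalArgumentException(
          "RSGroup name should only contain alphanumeric characters: " + name);
    }
  }

  public static void main(String[] args) {
    for (String candidate : new String[] { "foo*", "foo@", "-", "foo_123" }) {
      try {
        checkGroupName(candidate);
        System.out.println(candidate + " -> accepted");
      } catch (IllegalArgumentException e) {
        System.out.println(candidate + " -> rejected: " + e.getMessage());
      }
    }
  }
}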
2023-07-18 10:14:40,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:40,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:40,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:40,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-18 10:14:40,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 10:14:40,807 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:40,808 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:40,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 10:14:40,808 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:40,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:40,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:40,810 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:40,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:40,816 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:40,821 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:40,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:40,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:40,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:40,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:40,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:40,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676480855, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:40,856 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:40,858 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:40,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,859 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:40,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:40,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:40,880 INFO [Listener at localhost/45689] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498 (was 495) Potentially hanging thread: hconnection-0x297c531f-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=768 (was 768), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=495 (was 495), ProcessCount=173 (was 173), AvailableMemoryMB=3003 (was 3005) 2023-07-18 10:14:40,897 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=498, OpenFileDescriptor=768, MaxFileDescriptor=60000, SystemLoadAverage=495, ProcessCount=173, AvailableMemoryMB=3003 2023-07-18 10:14:40,897 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-18 10:14:40,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,903 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:40,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
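Note on the hbase.ResourceChecker lines above: each test is bracketed by before/after counts (Thread, OpenFileDescriptor, MaxFileDescriptor, SystemLoadAverage, ProcessCount, AvailableMemoryMB), and growth is flagged as a possible leak, e.g. Thread=495 -> 498 after testValidGroupNames. Below is a rough sketch of that bookkeeping for the thread count only; the names are invented for illustration and this is not the ResourceChecker implementation.

// Illustrative before/after resource check in the spirit of the ResourceChecker
// log lines above; it counts live JVM threads only. Hypothetical names throughout.
public class ThreadCountCheckSketch {
  private int before;

  public void beforeTest(String testName) {
    before = Thread.activeCount();
    System.out.println("before: " + testName + " Thread=" + before);
  }

  public void afterTest(String testName) {
    int after = Thread.activeCount();
    System.out.print("after: " + testName + " Thread=" + after + " (was " + before + ")");
    // Flag growth the way the log does with "- Thread LEAK? -".
    System.out.println(after > before ? " - Thread LEAK? -" : "");
  }

  public static void main(String[] args) throws InterruptedException {
    ThreadCountCheckSketch checker = new ThreadCountCheckSketch();
    checker.beforeTest("demoTest");
    Thread t = new Thread(() -> {
      try { Thread.sleep(200); } catch (InterruptedException ignored) { }
    });
    t.start(); // simulate a test that leaves a thread running briefly
    checker.afterTest("demoTest");
    t.join();
  }
}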
2023-07-18 10:14:40,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:40,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:40,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:40,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:40,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:40,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:40,916 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:40,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:40,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:40,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:40,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,928 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:40,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:40,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676480928, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:40,929 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-18 10:14:40,931 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-18 10:14:40,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-18 10:14:40,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-18 10:14:40,932 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-18 10:14:40,933 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-18 10:14:40,933 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-18 10:14:40,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-18 10:14:40,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-18 10:14:40,935 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-18 10:14:40,935 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-18 10:14:40,936 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar
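The ConstraintException above is raised because tearDownAfterMethod hands RSGroupAdminClient.moveServers the address jenkins-hbase4.apache.org:42907, which other entries in this log associate with the master RPC port (the master:42907-0x10177ed05f80000 ZooKeeper session) rather than with a live region server. A minimal sketch of the flow the following entries then record (create group "bar", then move only live region-server addresses into it), assuming the RSGroupAdminClient(Connection) client used by this test module and a reachable 2.4 cluster:

    import java.util.Set;
    import java.util.stream.Collectors;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveLiveServersToBarSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Create the target group, as the "add rsgroup bar" entry above does.
          rsGroupAdmin.addRSGroup("bar");
          // Build the move set from live region servers only, leaving one behind in
          // "default"; passing a dead or non-regionserver address (such as the master's
          // 42907 above) is what produces the "offline or it does not exist" rejection.
          Set<Address> toMove = admin.getClusterMetrics().getLiveServerMetrics().keySet()
              .stream()
              .skip(1) // keep at least one server in the default group
              .map(sn -> Address.fromParts(sn.getHostname(), sn.getPort()))
              .collect(Collectors.toSet());
          rsGroupAdmin.moveServers(toMove, "bar");
        }
      }
    }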
2023-07-18 10:14:40,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 10:14:40,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:40,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:40,946 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:40,946 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:40,948 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup bar 2023-07-18 10:14:40,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:40,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 10:14:40,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:40,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:40,953 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(238): Moving server region 6fb842bd011abbe63e3755e261be5bdf, which do not belong to RSGroup bar 2023-07-18 10:14:40,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=6fb842bd011abbe63e3755e261be5bdf, REOPEN/MOVE 2023-07-18 10:14:40,955 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(238): Moving server region c279e5fb45e4dd6ee6ca1bf14c1ea18e, which do not belong to RSGroup bar 2023-07-18 10:14:40,956 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=6fb842bd011abbe63e3755e261be5bdf, REOPEN/MOVE 2023-07-18 10:14:40,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=c279e5fb45e4dd6ee6ca1bf14c1ea18e, REOPEN/MOVE 2023-07-18 10:14:40,957 INFO 
[PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=6fb842bd011abbe63e3755e261be5bdf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:40,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-18 10:14:40,958 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=c279e5fb45e4dd6ee6ca1bf14c1ea18e, REOPEN/MOVE 2023-07-18 10:14:40,958 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675280957"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675280957"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675280957"}]},"ts":"1689675280957"} 2023-07-18 10:14:40,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 10:14:40,959 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=c279e5fb45e4dd6ee6ca1bf14c1ea18e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:40,960 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 10:14:40,960 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675280959"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675280959"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675280959"}]},"ts":"1689675280959"} 2023-07-18 10:14:40,959 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-18 10:14:40,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=72, state=RUNNABLE; CloseRegionProcedure 6fb842bd011abbe63e3755e261be5bdf, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:40,961 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40931,1689675272348, state=CLOSING 2023-07-18 10:14:40,963 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure c279e5fb45e4dd6ee6ca1bf14c1ea18e, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:40,963 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 10:14:40,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=74, state=RUNNABLE; CloseRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:40,964 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 10:14:40,964 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure c279e5fb45e4dd6ee6ca1bf14c1ea18e, server=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:41,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:41,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6fb842bd011abbe63e3755e261be5bdf, disabling compactions & flushes 2023-07-18 10:14:41,117 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 10:14:41,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:41,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:41,118 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 10:14:41,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. after waiting 0 ms 2023-07-18 10:14:41,118 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 10:14:41,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 
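Each REOPEN/MOVE TransitRegionStateProcedure above is the server-side counterpart of an explicit region move: close on the current host, pick an assignment candidate, reopen on the destination. A rough client-side equivalent, assuming a hypothetical destination ServerName (in this run the master ends up choosing jenkins-hbase4.apache.org,42163,1689675271845):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Hypothetical destination; substitute a ServerName the cluster actually reports,
          // including its start code.
          ServerName dest = ServerName.valueOf("regionserver-host", 16020, System.currentTimeMillis());
          // Ask the master to close each region where it is and reopen it on dest, the same
          // REGION_STATE_TRANSITION_CLOSE -> GET_ASSIGN_CANDIDATE -> open sequence logged here.
          for (RegionInfo region : admin.getRegions(TableName.valueOf("hbase", "namespace"))) {
            admin.move(Bytes.toBytes(region.getEncodedName()), dest);
          }
        }
      }
    }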
2023-07-18 10:14:41,118 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 10:14:41,118 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 10:14:41,118 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 10:14:41,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6fb842bd011abbe63e3755e261be5bdf 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-18 10:14:41,119 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=40.81 KB heapSize=63.08 KB 2023-07-18 10:14:41,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/.tmp/info/811e832c233f4df1add4aa6f69ce3589 2023-07-18 10:14:41,258 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.75 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/info/a5eac33106ae4735beb769d53811e8c2 2023-07-18 10:14:41,291 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5eac33106ae4735beb769d53811e8c2 2023-07-18 10:14:41,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/.tmp/info/811e832c233f4df1add4aa6f69ce3589 as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/info/811e832c233f4df1add4aa6f69ce3589 2023-07-18 10:14:41,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/info/811e832c233f4df1add4aa6f69ce3589, entries=2, sequenceid=6, filesize=4.8 K 2023-07-18 10:14:41,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6fb842bd011abbe63e3755e261be5bdf in 223ms, sequenceid=6, compaction requested=false 2023-07-18 10:14:41,360 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/rep_barrier/e72a444145a9427d8a203ce0c7d60483 2023-07-18 10:14:41,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 
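The close path above has to flush memstore data inline (~40.81 KB across the three hbase:meta families, 78 B for hbase:namespace) before the regions can be handed off. Nothing in this run does so, but a client can request flushes ahead of a planned move so the close has little left to write; a small sketch under that assumption:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushBeforeMoveSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Request flushes of the tables whose regions are about to be moved, so the
          // DefaultStoreFlusher work seen above happens before the close, not during it.
          admin.flush(TableName.META_TABLE_NAME);
          admin.flush(TableName.valueOf("hbase", "namespace"));
        }
      }
    }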
2023-07-18 10:14:41,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:41,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6fb842bd011abbe63e3755e261be5bdf: 2023-07-18 10:14:41,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6fb842bd011abbe63e3755e261be5bdf move to jenkins-hbase4.apache.org,42163,1689675271845 record at close sequenceid=6 2023-07-18 10:14:41,370 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e72a444145a9427d8a203ce0c7d60483 2023-07-18 10:14:41,373 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=75, ppid=72, state=RUNNABLE; CloseRegionProcedure 6fb842bd011abbe63e3755e261be5bdf, server=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:41,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:41,416 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/table/2ac0558fb24e4d71a5d096a835004dab 2023-07-18 10:14:41,430 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2ac0558fb24e4d71a5d096a835004dab 2023-07-18 10:14:41,431 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/info/a5eac33106ae4735beb769d53811e8c2 as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info/a5eac33106ae4735beb769d53811e8c2 2023-07-18 10:14:41,441 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5eac33106ae4735beb769d53811e8c2 2023-07-18 10:14:41,441 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info/a5eac33106ae4735beb769d53811e8c2, entries=42, sequenceid=92, filesize=9.7 K 2023-07-18 10:14:41,443 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/rep_barrier/e72a444145a9427d8a203ce0c7d60483 as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier/e72a444145a9427d8a203ce0c7d60483 2023-07-18 10:14:41,451 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e72a444145a9427d8a203ce0c7d60483 2023-07-18 10:14:41,451 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier/e72a444145a9427d8a203ce0c7d60483, entries=10, sequenceid=92, filesize=6.1 K 2023-07-18 10:14:41,452 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/table/2ac0558fb24e4d71a5d096a835004dab as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table/2ac0558fb24e4d71a5d096a835004dab 2023-07-18 10:14:41,460 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2ac0558fb24e4d71a5d096a835004dab 2023-07-18 10:14:41,460 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table/2ac0558fb24e4d71a5d096a835004dab, entries=15, sequenceid=92, filesize=6.2 K 2023-07-18 10:14:41,462 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~40.81 KB/41791, heapSize ~63.03 KB/64544, currentSize=0 B/0 for 1588230740 in 343ms, sequenceid=92, compaction requested=false 2023-07-18 10:14:41,495 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/recovered.edits/95.seqid, newMaxSeqId=95, maxSeqId=1 2023-07-18 10:14:41,499 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:14:41,500 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 10:14:41,500 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 10:14:41,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,42163,1689675271845 record at close sequenceid=92 2023-07-18 10:14:41,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 10:14:41,513 WARN [PEWorker-5] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-18 10:14:41,516 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=74 2023-07-18 10:14:41,516 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=74, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40931,1689675272348 in 550 msec 2023-07-18 10:14:41,516 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:41,667 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42163,1689675271845, state=OPENING 2023-07-18 
10:14:41,670 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 10:14:41,670 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 10:14:41,670 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=74, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:41,828 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 10:14:41,828 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:14:41,830 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42163%2C1689675271845.meta, suffix=.meta, logDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,42163,1689675271845, archiveDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs, maxLogs=32 2023-07-18 10:14:41,855 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK] 2023-07-18 10:14:41,855 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK] 2023-07-18 10:14:41,858 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK] 2023-07-18 10:14:41,862 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/WALs/jenkins-hbase4.apache.org,42163,1689675271845/jenkins-hbase4.apache.org%2C42163%2C1689675271845.meta.1689675281832.meta 2023-07-18 10:14:41,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK], DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK], DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK]] 2023-07-18 10:14:41,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:41,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 10:14:41,863 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 10:14:41,863 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-18 10:14:41,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 10:14:41,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:41,864 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 10:14:41,864 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 10:14:41,865 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 10:14:41,866 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info 2023-07-18 10:14:41,866 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info 2023-07-18 10:14:41,867 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 10:14:41,876 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5eac33106ae4735beb769d53811e8c2 2023-07-18 10:14:41,876 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info/a5eac33106ae4735beb769d53811e8c2 2023-07-18 10:14:41,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:41,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 10:14:41,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:14:41,878 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:14:41,878 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 10:14:41,885 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e72a444145a9427d8a203ce0c7d60483 2023-07-18 10:14:41,885 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier/e72a444145a9427d8a203ce0c7d60483 2023-07-18 10:14:41,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:41,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 10:14:41,887 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table 2023-07-18 10:14:41,887 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table 2023-07-18 10:14:41,887 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 
10:14:41,894 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2ac0558fb24e4d71a5d096a835004dab 2023-07-18 10:14:41,894 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table/2ac0558fb24e4d71a5d096a835004dab 2023-07-18 10:14:41,894 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:41,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740 2023-07-18 10:14:41,896 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740 2023-07-18 10:14:41,898 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 10:14:41,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 10:14:41,900 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=96; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11630159520, jitterRate=0.08314301073551178}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 10:14:41,900 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 10:14:41,901 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=78, masterSystemTime=1689675281822 2023-07-18 10:14:41,903 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 10:14:41,903 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 10:14:41,903 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42163,1689675271845, state=OPEN 2023-07-18 10:14:41,905 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 10:14:41,905 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 10:14:41,907 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=6fb842bd011abbe63e3755e261be5bdf, regionState=CLOSED 2023-07-18 10:14:41,907 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675281907"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675281907"}]},"ts":"1689675281907"} 2023-07-18 10:14:41,908 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40931] ipc.CallRunner(144): callId: 178 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:50346 deadline: 1689675341908, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42163 startCode=1689675271845. As of locationSeqNum=92. 2023-07-18 10:14:41,909 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=74 2023-07-18 10:14:41,909 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=74, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42163,1689675271845 in 235 msec 2023-07-18 10:14:41,910 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 951 msec 2023-07-18 10:14:41,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-18 10:14:42,010 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:14:42,011 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33870, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:14:42,015 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=72 2023-07-18 10:14:42,015 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=72, state=SUCCESS; CloseRegionProcedure 6fb842bd011abbe63e3755e261be5bdf, server=jenkins-hbase4.apache.org,40931,1689675272348 in 1.0520 sec 2023-07-18 10:14:42,016 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6fb842bd011abbe63e3755e261be5bdf, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:42,058 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c279e5fb45e4dd6ee6ca1bf14c1ea18e, disabling compactions & flushes 2023-07-18 10:14:42,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:42,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:42,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 
after waiting 0 ms 2023-07-18 10:14:42,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:42,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c279e5fb45e4dd6ee6ca1bf14c1ea18e 1/1 column families, dataSize=6.36 KB heapSize=10.50 KB 2023-07-18 10:14:42,074 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.36 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/.tmp/m/fa7e6a39bf04464a89ac957437b96721 2023-07-18 10:14:42,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa7e6a39bf04464a89ac957437b96721 2023-07-18 10:14:42,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/.tmp/m/fa7e6a39bf04464a89ac957437b96721 as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m/fa7e6a39bf04464a89ac957437b96721 2023-07-18 10:14:42,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa7e6a39bf04464a89ac957437b96721 2023-07-18 10:14:42,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m/fa7e6a39bf04464a89ac957437b96721, entries=9, sequenceid=26, filesize=5.5 K 2023-07-18 10:14:42,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.36 KB/6514, heapSize ~10.48 KB/10736, currentSize=0 B/0 for c279e5fb45e4dd6ee6ca1bf14c1ea18e in 31ms, sequenceid=26, compaction requested=false 2023-07-18 10:14:42,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-18 10:14:42,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:14:42,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 
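The 6.36 KB flushed above into column family m of hbase:rsgroup is the serialized group metadata that the earlier /hbase/rsgroup/* znode updates mirror. A quick way to peek at it from a client, assuming (as the group names read back elsewhere in this log suggest) that each group is stored as one row keyed by the group name:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DumpRSGroupRowsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table rsgroup = conn.getTable(TableName.valueOf("hbase", "rsgroup"));
             ResultScanner scanner = rsgroup.getScanner(new Scan().addFamily(Bytes.toBytes("m")))) {
          for (Result row : scanner) {
            // Row key is assumed to be the group name; the 'm' family (the one flushed
            // above) holds the serialized RSGroupInfo.
            System.out.println(Bytes.toString(row.getRow()));
          }
        }
      }
    }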
2023-07-18 10:14:42,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c279e5fb45e4dd6ee6ca1bf14c1ea18e: 2023-07-18 10:14:42,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c279e5fb45e4dd6ee6ca1bf14c1ea18e move to jenkins-hbase4.apache.org,42163,1689675271845 record at close sequenceid=26 2023-07-18 10:14:42,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,102 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=c279e5fb45e4dd6ee6ca1bf14c1ea18e, regionState=CLOSED 2023-07-18 10:14:42,102 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675282102"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675282102"}]},"ts":"1689675282102"} 2023-07-18 10:14:42,106 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=73 2023-07-18 10:14:42,106 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=73, state=SUCCESS; CloseRegionProcedure c279e5fb45e4dd6ee6ca1bf14c1ea18e, server=jenkins-hbase4.apache.org,40931,1689675272348 in 1.1420 sec 2023-07-18 10:14:42,106 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c279e5fb45e4dd6ee6ca1bf14c1ea18e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:42,107 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=6fb842bd011abbe63e3755e261be5bdf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:42,107 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675282106"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675282106"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675282106"}]},"ts":"1689675282106"} 2023-07-18 10:14:42,107 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=c279e5fb45e4dd6ee6ca1bf14c1ea18e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:42,107 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675282107"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675282107"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675282107"}]},"ts":"1689675282107"} 2023-07-18 10:14:42,111 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=72, state=RUNNABLE; OpenRegionProcedure 6fb842bd011abbe63e3755e261be5bdf, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:42,112 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=73, state=RUNNABLE; OpenRegionProcedure 
c279e5fb45e4dd6ee6ca1bf14c1ea18e, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:42,267 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:42,267 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6fb842bd011abbe63e3755e261be5bdf, NAME => 'hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:42,267 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:42,267 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:42,267 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:42,267 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:42,269 INFO [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:42,270 DEBUG [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/info 2023-07-18 10:14:42,270 DEBUG [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/info 2023-07-18 10:14:42,270 INFO [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6fb842bd011abbe63e3755e261be5bdf columnFamilyName info 2023-07-18 10:14:42,277 DEBUG [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] regionserver.HStore(539): loaded hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/info/811e832c233f4df1add4aa6f69ce3589 2023-07-18 10:14:42,278 INFO [StoreOpener-6fb842bd011abbe63e3755e261be5bdf-1] regionserver.HStore(310): 
Store=6fb842bd011abbe63e3755e261be5bdf/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:42,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:42,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:42,283 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:42,284 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6fb842bd011abbe63e3755e261be5bdf; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9460240000, jitterRate=-0.11894649267196655}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:42,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6fb842bd011abbe63e3755e261be5bdf: 2023-07-18 10:14:42,284 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf., pid=79, masterSystemTime=1689675282262 2023-07-18 10:14:42,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:42,286 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:42,286 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:42,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c279e5fb45e4dd6ee6ca1bf14c1ea18e, NAME => 'hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:42,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 10:14:42,287 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=6fb842bd011abbe63e3755e261be5bdf, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:42,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 
service=MultiRowMutationService 2023-07-18 10:14:42,287 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675282286"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675282286"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675282286"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675282286"}]},"ts":"1689675282286"} 2023-07-18 10:14:42,287 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-18 10:14:42,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:42,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,289 INFO [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,290 DEBUG [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m 2023-07-18 10:14:42,290 DEBUG [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m 2023-07-18 10:14:42,290 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=72 2023-07-18 10:14:42,290 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=72, state=SUCCESS; OpenRegionProcedure 6fb842bd011abbe63e3755e261be5bdf, server=jenkins-hbase4.apache.org,42163,1689675271845 in 180 msec 2023-07-18 10:14:42,290 INFO [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c279e5fb45e4dd6ee6ca1bf14c1ea18e columnFamilyName m 2023-07-18 10:14:42,291 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6fb842bd011abbe63e3755e261be5bdf, REOPEN/MOVE in 1.3360 sec 2023-07-18 10:14:42,297 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa7e6a39bf04464a89ac957437b96721 2023-07-18 10:14:42,297 DEBUG [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] regionserver.HStore(539): loaded hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m/fa7e6a39bf04464a89ac957437b96721 2023-07-18 10:14:42,298 INFO [StoreOpener-c279e5fb45e4dd6ee6ca1bf14c1ea18e-1] regionserver.HStore(310): Store=c279e5fb45e4dd6ee6ca1bf14c1ea18e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:42,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,300 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:42,304 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c279e5fb45e4dd6ee6ca1bf14c1ea18e; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@f1291ec, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:42,304 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c279e5fb45e4dd6ee6ca1bf14c1ea18e: 2023-07-18 10:14:42,305 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e., pid=80, masterSystemTime=1689675282262 2023-07-18 10:14:42,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:42,307 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 
2023-07-18 10:14:42,307 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=c279e5fb45e4dd6ee6ca1bf14c1ea18e, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:42,307 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675282307"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675282307"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675282307"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675282307"}]},"ts":"1689675282307"} 2023-07-18 10:14:42,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=73 2023-07-18 10:14:42,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=73, state=SUCCESS; OpenRegionProcedure c279e5fb45e4dd6ee6ca1bf14c1ea18e, server=jenkins-hbase4.apache.org,42163,1689675271845 in 197 msec 2023-07-18 10:14:42,314 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c279e5fb45e4dd6ee6ca1bf14c1ea18e, REOPEN/MOVE in 1.3550 sec 2023-07-18 10:14:42,961 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048, jenkins-hbase4.apache.org,40931,1689675272348] are moved back to default 2023-07-18 10:14:42,961 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-18 10:14:42,961 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:42,962 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40931] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:50352 deadline: 1689675342962, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42163 startCode=1689675271845. As of locationSeqNum=26. 2023-07-18 10:14:43,066 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40931] ipc.CallRunner(144): callId: 12 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:50352 deadline: 1689675343066, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42163 startCode=1689675271845. As of locationSeqNum=92. 
2023-07-18 10:14:43,168 DEBUG [hconnection-0x297c531f-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:14:43,177 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33884, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:14:43,198 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:43,198 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:43,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 10:14:43,201 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:43,203 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:43,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-18 10:14:43,206 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:43,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-18 10:14:43,207 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40931] ipc.CallRunner(144): callId: 187 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:50346 deadline: 1689675343207, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42163 startCode=1689675271845. As of locationSeqNum=26. 
2023-07-18 10:14:43,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 10:14:43,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 10:14:43,313 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:43,313 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 10:14:43,314 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:43,314 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:43,317 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:43,318 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,319 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 empty. 2023-07-18 10:14:43,319 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,320 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 10:14:43,335 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:43,336 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4b3ab6a1d0babf4877a27d64d891fd04, NAME => 'Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:43,347 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:43,347 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 4b3ab6a1d0babf4877a27d64d891fd04, disabling compactions & flushes 2023-07-18 10:14:43,347 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:43,347 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:43,347 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. after waiting 0 ms 2023-07-18 10:14:43,347 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:43,348 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:43,348 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 4b3ab6a1d0babf4877a27d64d891fd04: 2023-07-18 10:14:43,350 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:43,351 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675283351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675283351"}]},"ts":"1689675283351"} 2023-07-18 10:14:43,353 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 10:14:43,354 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:43,354 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675283354"}]},"ts":"1689675283354"} 2023-07-18 10:14:43,355 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-18 10:14:43,365 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, ASSIGN}] 2023-07-18 10:14:43,368 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, ASSIGN 2023-07-18 10:14:43,370 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:43,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 10:14:43,522 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:43,523 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675283522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675283522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675283522"}]},"ts":"1689675283522"} 2023-07-18 10:14:43,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:43,684 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 
2023-07-18 10:14:43,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b3ab6a1d0babf4877a27d64d891fd04, NAME => 'Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:43,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:43,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,687 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,689 DEBUG [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/f 2023-07-18 10:14:43,689 DEBUG [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/f 2023-07-18 10:14:43,689 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b3ab6a1d0babf4877a27d64d891fd04 columnFamilyName f 2023-07-18 10:14:43,690 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] regionserver.HStore(310): Store=4b3ab6a1d0babf4877a27d64d891fd04/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:43,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,692 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:43,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:43,699 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b3ab6a1d0babf4877a27d64d891fd04; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10419642400, jitterRate=-0.029595181345939636}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:43,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b3ab6a1d0babf4877a27d64d891fd04: 2023-07-18 10:14:43,700 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04., pid=83, masterSystemTime=1689675283679 2023-07-18 10:14:43,702 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:43,702 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 
2023-07-18 10:14:43,702 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:43,703 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675283702"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675283702"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675283702"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675283702"}]},"ts":"1689675283702"} 2023-07-18 10:14:43,706 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-18 10:14:43,706 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,42163,1689675271845 in 179 msec 2023-07-18 10:14:43,708 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-18 10:14:43,709 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, ASSIGN in 341 msec 2023-07-18 10:14:43,711 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:43,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675283711"}]},"ts":"1689675283711"} 2023-07-18 10:14:43,713 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-18 10:14:43,716 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:43,717 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 513 msec 2023-07-18 10:14:43,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 10:14:43,811 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-18 10:14:43,812 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-18 10:14:43,812 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:43,813 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40931] ipc.CallRunner(144): callId: 275 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:50348 deadline: 1689675343812, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42163 startCode=1689675271845. As of locationSeqNum=92. 2023-07-18 10:14:43,915 DEBUG [hconnection-0x5f7045aa-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:14:43,924 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33898, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:14:43,931 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-18 10:14:43,932 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:43,932 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-18 10:14:43,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-18 10:14:43,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:43,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 10:14:43,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:43,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:43,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-18 10:14:43,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 4b3ab6a1d0babf4877a27d64d891fd04 to RSGroup bar 2023-07-18 10:14:43,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:43,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:43,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:43,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:43,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 10:14:43,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:43,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, REOPEN/MOVE 2023-07-18 10:14:43,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-18 10:14:43,947 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, REOPEN/MOVE 2023-07-18 10:14:43,947 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:43,948 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675283947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675283947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675283947"}]},"ts":"1689675283947"} 2023-07-18 10:14:43,949 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:44,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b3ab6a1d0babf4877a27d64d891fd04, disabling compactions & flushes 2023-07-18 10:14:44,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:44,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:44,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. after waiting 0 ms 2023-07-18 10:14:44,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:44,111 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:44,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 
2023-07-18 10:14:44,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b3ab6a1d0babf4877a27d64d891fd04: 2023-07-18 10:14:44,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4b3ab6a1d0babf4877a27d64d891fd04 move to jenkins-hbase4.apache.org,35633,1689675275991 record at close sequenceid=2 2023-07-18 10:14:44,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,115 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=CLOSED 2023-07-18 10:14:44,115 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675284115"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675284115"}]},"ts":"1689675284115"} 2023-07-18 10:14:44,119 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-18 10:14:44,119 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,42163,1689675271845 in 167 msec 2023-07-18 10:14:44,121 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:44,271 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 10:14:44,272 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:44,272 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675284272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675284272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675284272"}]},"ts":"1689675284272"} 2023-07-18 10:14:44,274 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:44,430 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 
2023-07-18 10:14:44,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b3ab6a1d0babf4877a27d64d891fd04, NAME => 'Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:44,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:44,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,435 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,437 DEBUG [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/f 2023-07-18 10:14:44,437 DEBUG [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/f 2023-07-18 10:14:44,438 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b3ab6a1d0babf4877a27d64d891fd04 columnFamilyName f 2023-07-18 10:14:44,438 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] regionserver.HStore(310): Store=4b3ab6a1d0babf4877a27d64d891fd04/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:44,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,441 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:44,452 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b3ab6a1d0babf4877a27d64d891fd04; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9668274720, jitterRate=-0.09957174956798553}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:44,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b3ab6a1d0babf4877a27d64d891fd04: 2023-07-18 10:14:44,453 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04., pid=86, masterSystemTime=1689675284426 2023-07-18 10:14:44,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:44,455 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:44,457 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:44,457 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675284457"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675284457"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675284457"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675284457"}]},"ts":"1689675284457"} 2023-07-18 10:14:44,460 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-18 10:14:44,460 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,35633,1689675275991 in 185 msec 2023-07-18 10:14:44,462 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, REOPEN/MOVE in 516 msec 2023-07-18 10:14:44,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-18 10:14:44,946 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-18 10:14:44,946 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:44,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:44,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:44,954 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 10:14:44,954 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:44,955 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 10:14:44,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:44,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:40186 deadline: 1689676484955, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-18 10:14:44,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup default 2023-07-18 10:14:44,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:44,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:40186 deadline: 1689676484956, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-18 10:14:44,959 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-18 10:14:44,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:44,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 10:14:44,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:44,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:44,967 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-18 10:14:44,967 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 4b3ab6a1d0babf4877a27d64d891fd04 to RSGroup default 2023-07-18 10:14:44,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, REOPEN/MOVE 2023-07-18 10:14:44,969 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 10:14:44,970 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, REOPEN/MOVE 2023-07-18 10:14:44,971 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:44,971 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675284971"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675284971"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675284971"}]},"ts":"1689675284971"} 2023-07-18 10:14:44,975 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:45,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b3ab6a1d0babf4877a27d64d891fd04, disabling compactions & flushes 2023-07-18 10:14:45,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:45,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:45,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. after waiting 0 ms 2023-07-18 10:14:45,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:45,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:45,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 
2023-07-18 10:14:45,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b3ab6a1d0babf4877a27d64d891fd04: 2023-07-18 10:14:45,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4b3ab6a1d0babf4877a27d64d891fd04 move to jenkins-hbase4.apache.org,42163,1689675271845 record at close sequenceid=5 2023-07-18 10:14:45,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,150 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=CLOSED 2023-07-18 10:14:45,150 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675285150"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675285150"}]},"ts":"1689675285150"} 2023-07-18 10:14:45,154 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-18 10:14:45,155 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,35633,1689675275991 in 179 msec 2023-07-18 10:14:45,159 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:45,193 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 10:14:45,310 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:45,310 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675285310"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675285310"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675285310"}]},"ts":"1689675285310"} 2023-07-18 10:14:45,313 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:45,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 
2023-07-18 10:14:45,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b3ab6a1d0babf4877a27d64d891fd04, NAME => 'Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:45,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:45,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,475 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,476 DEBUG [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/f 2023-07-18 10:14:45,476 DEBUG [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/f 2023-07-18 10:14:45,477 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b3ab6a1d0babf4877a27d64d891fd04 columnFamilyName f 2023-07-18 10:14:45,477 INFO [StoreOpener-4b3ab6a1d0babf4877a27d64d891fd04-1] regionserver.HStore(310): Store=4b3ab6a1d0babf4877a27d64d891fd04/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:45,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,481 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:45,497 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b3ab6a1d0babf4877a27d64d891fd04; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10490394880, jitterRate=-0.02300584316253662}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:45,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b3ab6a1d0babf4877a27d64d891fd04: 2023-07-18 10:14:45,498 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04., pid=89, masterSystemTime=1689675285466 2023-07-18 10:14:45,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:45,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:45,501 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:45,501 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675285500"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675285500"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675285500"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675285500"}]},"ts":"1689675285500"} 2023-07-18 10:14:45,505 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-18 10:14:45,505 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,42163,1689675271845 in 190 msec 2023-07-18 10:14:45,507 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, REOPEN/MOVE in 537 msec 2023-07-18 10:14:45,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-18 10:14:45,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-18 10:14:45,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:45,974 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:45,975 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:45,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 10:14:45,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:45,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:40186 deadline: 1689676485977, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
2023-07-18 10:14:45,979 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup default 2023-07-18 10:14:45,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:45,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 10:14:45,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:45,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:45,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-18 10:14:45,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048, jenkins-hbase4.apache.org,40931,1689675272348] are moved back to bar 2023-07-18 10:14:45,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-18 10:14:45,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:45,995 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:45,996 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:45,998 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 10:14:45,999 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40931] ipc.CallRunner(144): callId: 212 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:50346 deadline: 1689675345998, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42163 startCode=1689675271845. As of locationSeqNum=6. 
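For reference, the RSGroup admin sequence captured above (move the test table back to 'default', fail to remove group 'bar' while it still holds servers, move the servers back, then remove the now-empty group) roughly corresponds to the client-side sketch below. This is illustrative only, not part of the captured log; it assumes the RSGroupAdminClient API named in the stack traces in this log, and the server addresses are the ones reported above.

    import java.util.Collections;
    import java.util.Set;
    import java.util.TreeSet;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RemoveGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Move the test table back to 'default' (RSGroupAdminService.MoveTables above).
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");

          // Removing 'bar' while it still holds servers fails with the
          // ConstraintException logged above.
          try {
            rsGroupAdmin.removeRSGroup("bar");
          } catch (ConstraintException expected) {
            // "RSGroup bar has 3 servers; you must remove these servers ..."
          }

          // Move the three servers back to 'default' (addresses taken from the log),
          // after which removing the empty group succeeds.
          Set<Address> servers = new TreeSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 40931));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 40033));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35633));
          rsGroupAdmin.moveServers(servers, "default");
          rsGroupAdmin.removeRSGroup("bar");
        }
      }
    }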
2023-07-18 10:14:46,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:46,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 10:14:46,116 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:46,120 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,120 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,122 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,122 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,124 INFO [Listener at localhost/45689] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-18 10:14:46,125 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-18 10:14:46,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-18 10:14:46,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 10:14:46,130 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675286130"}]},"ts":"1689675286130"} 2023-07-18 10:14:46,132 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-18 10:14:46,135 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-18 10:14:46,136 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, UNASSIGN}] 2023-07-18 10:14:46,138 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, UNASSIGN 2023-07-18 10:14:46,139 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:46,139 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675286139"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675286139"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675286139"}]},"ts":"1689675286139"} 2023-07-18 10:14:46,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:46,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 10:14:46,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:46,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b3ab6a1d0babf4877a27d64d891fd04, disabling compactions & flushes 2023-07-18 10:14:46,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:46,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:46,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. after waiting 0 ms 2023-07-18 10:14:46,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 2023-07-18 10:14:46,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 10:14:46,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04. 
2023-07-18 10:14:46,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b3ab6a1d0babf4877a27d64d891fd04: 2023-07-18 10:14:46,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:46,305 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=4b3ab6a1d0babf4877a27d64d891fd04, regionState=CLOSED 2023-07-18 10:14:46,305 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689675286305"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675286305"}]},"ts":"1689675286305"} 2023-07-18 10:14:46,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-18 10:14:46,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 4b3ab6a1d0babf4877a27d64d891fd04, server=jenkins-hbase4.apache.org,42163,1689675271845 in 164 msec 2023-07-18 10:14:46,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-18 10:14:46,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=4b3ab6a1d0babf4877a27d64d891fd04, UNASSIGN in 172 msec 2023-07-18 10:14:46,310 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675286310"}]},"ts":"1689675286310"} 2023-07-18 10:14:46,312 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-18 10:14:46,314 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-18 10:14:46,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 189 msec 2023-07-18 10:14:46,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 10:14:46,433 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-18 10:14:46,434 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-18 10:14:46,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 10:14:46,438 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 10:14:46,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-18 10:14:46,439 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 10:14:46,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:46,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:46,444 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:46,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 10:14:46,446 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/recovered.edits] 2023-07-18 10:14:46,454 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/recovered.edits/10.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04/recovered.edits/10.seqid 2023-07-18 10:14:46,454 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testFailRemoveGroup/4b3ab6a1d0babf4877a27d64d891fd04 2023-07-18 10:14:46,454 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 10:14:46,458 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 10:14:46,461 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-18 10:14:46,463 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-18 10:14:46,464 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 10:14:46,464 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-18 10:14:46,464 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675286464"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:46,467 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 10:14:46,467 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 4b3ab6a1d0babf4877a27d64d891fd04, NAME => 'Group_testFailRemoveGroup,,1689675283203.4b3ab6a1d0babf4877a27d64d891fd04.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 10:14:46,467 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-18 10:14:46,467 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689675286467"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:46,469 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-18 10:14:46,472 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 10:14:46,476 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 38 msec 2023-07-18 10:14:46,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 10:14:46,547 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-18 10:14:46,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,552 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,553 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:46,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
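The disable/delete cleanup recorded above (DisableTableProcedure pid=90, DeleteTableProcedure pid=93) matches the standard synchronous Admin calls; a minimal sketch, assuming a live Connection to this minicluster:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    // Disable the table, then delete it, mirroring procedures pid=90 and pid=93 above.
    public class DropTableSketch {
      static void dropTable(Connection conn) throws Exception {
        TableName table = TableName.valueOf("Group_testFailRemoveGroup");
        try (Admin admin = conn.getAdmin()) {
          if (admin.tableExists(table)) {
            if (admin.isTableEnabled(table)) {
              admin.disableTable(table);   // blocks until the DisableTableProcedure completes
            }
            admin.deleteTable(table);      // blocks until the DeleteTableProcedure completes
          }
        }
      }
    }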
2023-07-18 10:14:46,553 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:46,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:46,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:46,563 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:46,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:46,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:46,578 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:46,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:46,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:46,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:46,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:46,620 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:46,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:46,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676486623, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:46,624 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:46,626 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:46,627 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,627 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,628 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:46,628 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:46,629 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:46,650 INFO [Listener at localhost/45689] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=513 (was 498) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-726264595_17 at /127.0.0.1:54166 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-473618993_17 at /127.0.0.1:44648 [Receiving block BP-1078778366-172.31.14.131-1689675266234:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-16 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-473618993_17 at /127.0.0.1:50604 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-473618993_17 at /127.0.0.1:44662 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1078778366-172.31.14.131-1689675266234:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796-prefix:jenkins-hbase4.apache.org,42163,1689675271845.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x5f7045aa-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1634953708_17 at /127.0.0.1:41930 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1078778366-172.31.14.131-1689675266234:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-473618993_17 at /127.0.0.1:41954 [Receiving block BP-1078778366-172.31.14.131-1689675266234:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1078778366-172.31.14.131-1689675266234:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-473618993_17 at /127.0.0.1:54136 [Receiving block BP-1078778366-172.31.14.131-1689675266234:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=793 (was 768) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=458 (was 495), ProcessCount=173 (was 173), AvailableMemoryMB=3774 (was 3003) - AvailableMemoryMB LEAK? 
- 2023-07-18 10:14:46,652 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-18 10:14:46,678 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=513, OpenFileDescriptor=793, MaxFileDescriptor=60000, SystemLoadAverage=458, ProcessCount=173, AvailableMemoryMB=3770 2023-07-18 10:14:46,678 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-18 10:14:46,678 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-18 10:14:46,686 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,686 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,687 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:46,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 10:14:46,687 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:46,688 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:46,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:46,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:46,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:46,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:46,701 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:46,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:46,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:46,708 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:46,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:46,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,725 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:46,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:46,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676486725, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:46,726 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:14:46,732 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:46,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,733 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:46,734 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:46,734 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:46,735 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:46,735 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:46,736 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_740923107 2023-07-18 10:14:46,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:46,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_740923107 2023-07-18 10:14:46,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:46,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:46,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,749 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35633] to rsgroup Group_testMultiTableMove_740923107 2023-07-18 10:14:46,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_740923107 2023-07-18 10:14:46,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:46,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:46,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 10:14:46,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991] are moved back to default 2023-07-18 10:14:46,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_740923107 2023-07-18 10:14:46,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:46,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:46,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:46,768 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_740923107 2023-07-18 10:14:46,768 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:46,770 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:46,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 10:14:46,774 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:46,774 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-18 10:14:46,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 10:14:46,777 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:46,777 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_740923107 2023-07-18 10:14:46,778 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:46,778 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:46,786 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:46,789 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:46,789 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 empty. 2023-07-18 10:14:46,790 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:46,790 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 10:14:46,838 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:46,840 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => c5ab42bbbc8c13633597d8400a815d71, NAME => 'GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:46,868 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:46,868 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
c5ab42bbbc8c13633597d8400a815d71, disabling compactions & flushes 2023-07-18 10:14:46,868 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:46,868 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:46,868 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. after waiting 0 ms 2023-07-18 10:14:46,868 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:46,868 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:46,869 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for c5ab42bbbc8c13633597d8400a815d71: 2023-07-18 10:14:46,872 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:46,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675286875"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675286875"}]},"ts":"1689675286875"} 2023-07-18 10:14:46,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 10:14:46,881 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
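[Editor's illustration, not part of the captured log] The AddRSGroup and MoveServers requests recorded a few entries earlier in this test method (add rsgroup Group_testMultiTableMove_740923107, then move server jenkins-hbase4.apache.org:35633 into it) come from the test's rsgroup admin client, as the stack trace above shows via RSGroupAdminClient.moveServers. A minimal sketch of an equivalent branch-2.x call sequence is below; the Connection handling is an assumption, and only the group and server names are taken from the log.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupMoveSketch {
  // Sketch: reproduce the AddRSGroup / MoveServers RPCs seen in the log.
  // 'conn' is an already-open cluster Connection (assumption: obtained from the
  // mini cluster this test runs against).
  static void moveOneServer(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // "add rsgroup Group_testMultiTableMove_740923107"
    rsGroupAdmin.addRSGroup("Group_testMultiTableMove_740923107");

    // "move servers [jenkins-hbase4.apache.org:35633] to rsgroup Group_testMultiTableMove_740923107"
    Address server = Address.fromParts("jenkins-hbase4.apache.org", 35633);
    rsGroupAdmin.moveServers(Collections.singleton(server),
        "Group_testMultiTableMove_740923107");
  }
}
```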
2023-07-18 10:14:46,885 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:46,885 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675286885"}]},"ts":"1689675286885"} 2023-07-18 10:14:46,887 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-18 10:14:46,895 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:46,895 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:46,895 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:46,896 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:46,896 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:46,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, ASSIGN}] 2023-07-18 10:14:46,899 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, ASSIGN 2023-07-18 10:14:46,903 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40033,1689675272048; forceNewPlan=false, retain=false 2023-07-18 10:14:47,054 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 10:14:47,055 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=c5ab42bbbc8c13633597d8400a815d71, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:47,056 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675287055"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675287055"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675287055"}]},"ts":"1689675287055"} 2023-07-18 10:14:47,059 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure c5ab42bbbc8c13633597d8400a815d71, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:47,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 10:14:47,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:47,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c5ab42bbbc8c13633597d8400a815d71, NAME => 'GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:47,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:47,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:47,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:47,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:47,220 INFO [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:47,222 DEBUG [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/f 2023-07-18 10:14:47,222 DEBUG [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/f 2023-07-18 10:14:47,222 INFO [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c5ab42bbbc8c13633597d8400a815d71 columnFamilyName f 2023-07-18 10:14:47,223 INFO [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] regionserver.HStore(310): Store=c5ab42bbbc8c13633597d8400a815d71/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:47,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:47,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:47,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:47,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:47,230 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c5ab42bbbc8c13633597d8400a815d71; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9789065760, jitterRate=-0.08832220733165741}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:47,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c5ab42bbbc8c13633597d8400a815d71: 2023-07-18 10:14:47,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71., pid=96, masterSystemTime=1689675287214 2023-07-18 10:14:47,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:47,233 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 
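[Editor's illustration, not part of the captured log] The CreateTableProcedure and region open above were triggered by the client create-table request whose descriptor is spelled out in the log: table GrouptestMultiTableMoveA, one column family 'f' with VERSIONS => 1, REGION_REPLICATION => 1, all other attributes at defaults. A minimal sketch of an equivalent HBase 2.x client call follows; the Connection handling is an assumption, only the table name and family settings are taken from the log.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  // Sketch: build a descriptor matching the one logged for GrouptestMultiTableMoveA
  // (single family 'f', VERSIONS => 1, REGION_REPLICATION => 1) and submit it.
  // 'conn' is an already-open cluster Connection (assumption).
  static void createTableA(Connection conn) throws Exception {
    TableName tableName = TableName.valueOf("GrouptestMultiTableMoveA");
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(tableName)
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)
            .build())
        .build();
    try (Admin admin = conn.getAdmin()) {
      // Issues the "procedure request for creating table" seen above and blocks
      // until the CreateTableProcedure finishes.
      admin.createTable(desc);
    }
  }
}
```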
2023-07-18 10:14:47,234 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=c5ab42bbbc8c13633597d8400a815d71, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:47,234 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675287234"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675287234"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675287234"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675287234"}]},"ts":"1689675287234"} 2023-07-18 10:14:47,240 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-18 10:14:47,240 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure c5ab42bbbc8c13633597d8400a815d71, server=jenkins-hbase4.apache.org,40033,1689675272048 in 179 msec 2023-07-18 10:14:47,408 INFO [AsyncFSWAL-0-hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData-prefix:jenkins-hbase4.apache.org,42907,1689675269765] wal.AbstractFSWAL(1141): Slow sync cost: 166 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33197,DS-0d50409a-8b6d-492c-bf7f-db8c86894d5f,DISK], DatanodeInfoWithStorage[127.0.0.1:44091,DS-f19a9f53-99d6-4507-a0b5-5709798563f1,DISK], DatanodeInfoWithStorage[127.0.0.1:39177,DS-0174ddba-b045-40fa-862f-a107e2de6134,DISK]] 2023-07-18 10:14:47,408 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-18 10:14:47,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 10:14:47,408 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, ASSIGN in 344 msec 2023-07-18 10:14:47,409 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:47,410 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675287409"}]},"ts":"1689675287409"} 2023-07-18 10:14:47,411 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-18 10:14:47,413 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:47,415 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 643 msec 2023-07-18 10:14:47,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 10:14:47,910 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-18 10:14:47,910 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-18 10:14:47,910 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:47,918 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 2023-07-18 10:14:47,918 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:47,918 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-18 10:14:47,920 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:47,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 10:14:47,923 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:47,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-18 10:14:47,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 10:14:47,926 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:47,926 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_740923107 2023-07-18 10:14:47,927 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:47,927 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:47,931 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:47,933 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:47,933 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 empty. 
2023-07-18 10:14:47,934 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:47,934 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 10:14:47,952 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:47,954 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 08460148a3f0ee2c4975a15eedae70f2, NAME => 'GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:47,971 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:47,971 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 08460148a3f0ee2c4975a15eedae70f2, disabling compactions & flushes 2023-07-18 10:14:47,971 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:47,971 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:47,971 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. after waiting 0 ms 2023-07-18 10:14:47,971 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:47,971 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 
2023-07-18 10:14:47,971 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 08460148a3f0ee2c4975a15eedae70f2: 2023-07-18 10:14:47,974 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:47,975 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675287975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675287975"}]},"ts":"1689675287975"} 2023-07-18 10:14:47,976 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 10:14:47,977 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:47,977 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675287977"}]},"ts":"1689675287977"} 2023-07-18 10:14:47,978 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-18 10:14:47,982 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:47,982 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:47,982 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:47,982 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:47,982 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:47,983 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, ASSIGN}] 2023-07-18 10:14:47,984 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, ASSIGN 2023-07-18 10:14:47,990 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40033,1689675272048; forceNewPlan=false, retain=false 2023-07-18 10:14:48,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 10:14:48,140 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
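[Editor's illustration, not part of the captured log] The balancer and TransitRegionStateProcedure entries around here plan the single region of GrouptestMultiTableMoveB onto jenkins-hbase4.apache.org,40033. A placement like this can be confirmed from the client side by asking for the region's current location; the sketch below is an assumed helper, not code quoted from the test source.

```java
import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionPlacementCheckSketch {
  // Sketch: look up where the (single) region of a table currently lives.
  // 'conn' is an already-open cluster Connection (assumption).
  static ServerName locateSingleRegion(Connection conn, String table) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf(table))) {
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      // These test tables have exactly one region, so the first entry is the
      // RegionServer chosen by the assignment recorded in the log.
      return locations.get(0).getServerName();
    }
  }
}
```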
2023-07-18 10:14:48,141 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=08460148a3f0ee2c4975a15eedae70f2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:48,142 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675288141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675288141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675288141"}]},"ts":"1689675288141"} 2023-07-18 10:14:48,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 08460148a3f0ee2c4975a15eedae70f2, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:48,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 10:14:48,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:48,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 08460148a3f0ee2c4975a15eedae70f2, NAME => 'GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:48,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:48,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,301 INFO [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,302 DEBUG [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/f 2023-07-18 10:14:48,302 DEBUG [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/f 2023-07-18 10:14:48,303 INFO [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 08460148a3f0ee2c4975a15eedae70f2 columnFamilyName f 2023-07-18 10:14:48,303 INFO [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] regionserver.HStore(310): Store=08460148a3f0ee2c4975a15eedae70f2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:48,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:48,309 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 08460148a3f0ee2c4975a15eedae70f2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10862616160, jitterRate=0.011659964919090271}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:48,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 08460148a3f0ee2c4975a15eedae70f2: 2023-07-18 10:14:48,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2., pid=99, masterSystemTime=1689675288295 2023-07-18 10:14:48,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:48,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 
2023-07-18 10:14:48,312 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=08460148a3f0ee2c4975a15eedae70f2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:48,312 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675288312"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675288312"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675288312"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675288312"}]},"ts":"1689675288312"} 2023-07-18 10:14:48,316 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-18 10:14:48,316 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 08460148a3f0ee2c4975a15eedae70f2, server=jenkins-hbase4.apache.org,40033,1689675272048 in 171 msec 2023-07-18 10:14:48,318 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-18 10:14:48,318 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, ASSIGN in 334 msec 2023-07-18 10:14:48,319 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:48,319 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675288319"}]},"ts":"1689675288319"} 2023-07-18 10:14:48,320 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-18 10:14:48,322 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:48,324 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 402 msec 2023-07-18 10:14:48,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 10:14:48,527 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-18 10:14:48,528 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-18 10:14:48,528 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:48,531 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
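Once the CREATE operation completes, the listener blocks until every region of the new table is assigned, first checking hbase:meta and then the master's assignment-manager state, with a 60000 ms timeout. In test code this maps onto HBaseTestingUtility's wait helper; the sketch below assumes a running utility instance and is not copied from the test itself.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public final class WaitForAssignmentSketch {
  // Blocks until every region of the table is assigned, checking hbase:meta and
  // the master's assignment state, using the utility's default timeout.
  public static void await(HBaseTestingUtility testUtil) throws IOException {
    testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("GrouptestMultiTableMoveB"));
  }
}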
2023-07-18 10:14:48,531 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:48,531 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-18 10:14:48,532 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:48,544 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 10:14:48,544 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:48,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 10:14:48,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:48,545 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_740923107 2023-07-18 10:14:48,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_740923107 2023-07-18 10:14:48,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:48,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_740923107 2023-07-18 10:14:48,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:48,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:48,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_740923107 2023-07-18 10:14:48,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 08460148a3f0ee2c4975a15eedae70f2 to RSGroup Group_testMultiTableMove_740923107 2023-07-18 10:14:48,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, REOPEN/MOVE 2023-07-18 10:14:48,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_740923107 2023-07-18 10:14:48,559 INFO [PEWorker-2] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, REOPEN/MOVE 2023-07-18 10:14:48,559 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region c5ab42bbbc8c13633597d8400a815d71 to RSGroup Group_testMultiTableMove_740923107 2023-07-18 10:14:48,560 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=08460148a3f0ee2c4975a15eedae70f2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:48,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, REOPEN/MOVE 2023-07-18 10:14:48,560 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675288560"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675288560"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675288560"}]},"ts":"1689675288560"} 2023-07-18 10:14:48,561 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, REOPEN/MOVE 2023-07-18 10:14:48,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_740923107, current retry=0 2023-07-18 10:14:48,571 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=c5ab42bbbc8c13633597d8400a815d71, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:48,571 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675288570"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675288570"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675288570"}]},"ts":"1689675288570"} 2023-07-18 10:14:48,571 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 08460148a3f0ee2c4975a15eedae70f2, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:48,573 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure c5ab42bbbc8c13633597d8400a815d71, server=jenkins-hbase4.apache.org,40033,1689675272048}] 2023-07-18 10:14:48,724 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:48,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c5ab42bbbc8c13633597d8400a815d71, disabling compactions & flushes 2023-07-18 10:14:48,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 
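The MoveTables request above sends both tables to Group_testMultiTableMove_740923107, and the master responds by scheduling one REOPEN/MOVE TransitRegionStateProcedure per region (pid=100 and pid=101 here). A rough sketch of the client side follows, assuming the RSGroupAdminClient shipped with the hbase-rsgroup module and an already created group that holds at least one RegionServer; the exact client class and signatures should be checked against the branch in use.

import java.util.Arrays;
import java.util.HashSet;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveTablesSketch {
  // Moves both tables into the target group, then reads their group back,
  // mirroring the MoveTables and GetRSGroupInfoOfTable requests in the log.
  public static void move(Connection conn) throws Exception {
    String group = "Group_testMultiTableMove_740923107";
    TableName a = TableName.valueOf("GrouptestMultiTableMoveA");
    TableName b = TableName.valueOf("GrouptestMultiTableMoveB");
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveTables(new HashSet<>(Arrays.asList(a, b)), group);
    System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(a).getName()); // expected: group
    System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(b).getName()); // expected: group
  }
}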
2023-07-18 10:14:48,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:48,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. after waiting 0 ms 2023-07-18 10:14:48,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:48,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:48,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:48,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c5ab42bbbc8c13633597d8400a815d71: 2023-07-18 10:14:48,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c5ab42bbbc8c13633597d8400a815d71 move to jenkins-hbase4.apache.org,35633,1689675275991 record at close sequenceid=2 2023-07-18 10:14:48,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:48,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 08460148a3f0ee2c4975a15eedae70f2, disabling compactions & flushes 2023-07-18 10:14:48,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:48,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:48,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. after waiting 0 ms 2023-07-18 10:14:48,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 
2023-07-18 10:14:48,734 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=c5ab42bbbc8c13633597d8400a815d71, regionState=CLOSED 2023-07-18 10:14:48,734 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675288733"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675288733"}]},"ts":"1689675288733"} 2023-07-18 10:14:48,737 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-18 10:14:48,737 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure c5ab42bbbc8c13633597d8400a815d71, server=jenkins-hbase4.apache.org,40033,1689675272048 in 162 msec 2023-07-18 10:14:48,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:48,738 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:48,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 
2023-07-18 10:14:48,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 08460148a3f0ee2c4975a15eedae70f2: 2023-07-18 10:14:48,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 08460148a3f0ee2c4975a15eedae70f2 move to jenkins-hbase4.apache.org,35633,1689675275991 record at close sequenceid=2 2023-07-18 10:14:48,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:48,741 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=08460148a3f0ee2c4975a15eedae70f2, regionState=CLOSED 2023-07-18 10:14:48,741 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675288741"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675288741"}]},"ts":"1689675288741"} 2023-07-18 10:14:48,744 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-18 10:14:48,744 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 08460148a3f0ee2c4975a15eedae70f2, server=jenkins-hbase4.apache.org,40033,1689675272048 in 171 msec 2023-07-18 10:14:48,744 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35633,1689675275991; forceNewPlan=false, retain=false 2023-07-18 10:14:48,888 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=08460148a3f0ee2c4975a15eedae70f2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:48,888 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=c5ab42bbbc8c13633597d8400a815d71, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:48,888 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675288888"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675288888"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675288888"}]},"ts":"1689675288888"} 2023-07-18 10:14:48,888 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675288888"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675288888"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675288888"}]},"ts":"1689675288888"} 2023-07-18 10:14:48,890 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 08460148a3f0ee2c4975a15eedae70f2, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:48,891 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, 
state=RUNNABLE; OpenRegionProcedure c5ab42bbbc8c13633597d8400a815d71, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:49,049 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:49,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c5ab42bbbc8c13633597d8400a815d71, NAME => 'GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:49,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:49,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,062 INFO [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,071 DEBUG [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/f 2023-07-18 10:14:49,071 DEBUG [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/f 2023-07-18 10:14:49,072 INFO [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c5ab42bbbc8c13633597d8400a815d71 columnFamilyName f 2023-07-18 10:14:49,073 INFO [StoreOpener-c5ab42bbbc8c13633597d8400a815d71-1] regionserver.HStore(310): Store=c5ab42bbbc8c13633597d8400a815d71/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:49,074 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,088 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,092 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c5ab42bbbc8c13633597d8400a815d71; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11716936320, jitterRate=0.09122473001480103}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:49,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c5ab42bbbc8c13633597d8400a815d71: 2023-07-18 10:14:49,093 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71., pid=105, masterSystemTime=1689675289044 2023-07-18 10:14:49,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:49,095 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:49,095 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 
2023-07-18 10:14:49,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 08460148a3f0ee2c4975a15eedae70f2, NAME => 'GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:49,095 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=c5ab42bbbc8c13633597d8400a815d71, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:49,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:49,095 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675289095"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675289095"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675289095"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675289095"}]},"ts":"1689675289095"} 2023-07-18 10:14:49,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:49,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:49,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:49,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-18 10:14:49,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure c5ab42bbbc8c13633597d8400a815d71, server=jenkins-hbase4.apache.org,35633,1689675275991 in 206 msec 2023-07-18 10:14:49,100 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, REOPEN/MOVE in 540 msec 2023-07-18 10:14:49,102 INFO [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:49,104 DEBUG [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/f 2023-07-18 10:14:49,104 DEBUG [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/f 2023-07-18 10:14:49,104 INFO [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 08460148a3f0ee2c4975a15eedae70f2 columnFamilyName f 2023-07-18 10:14:49,106 INFO [StoreOpener-08460148a3f0ee2c4975a15eedae70f2-1] regionserver.HStore(310): Store=08460148a3f0ee2c4975a15eedae70f2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:49,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:49,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:49,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:49,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 08460148a3f0ee2c4975a15eedae70f2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10957312000, jitterRate=0.020479202270507812}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:49,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 08460148a3f0ee2c4975a15eedae70f2: 2023-07-18 10:14:49,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2., pid=104, masterSystemTime=1689675289044 2023-07-18 10:14:49,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:49,115 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 
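After the REOPEN/MOVE both regions reopen on jenkins-hbase4.apache.org,35633,1689675275991, the server belonging to the target group. A small sketch of how a client could confirm where a table's regions now live, using the standard RegionLocator API; the helper class name is illustrative.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public final class RegionLocationSketch {
  // Prints the RegionServer currently hosting each region of the table,
  // e.g. jenkins-hbase4.apache.org,35633,1689675275991 after the move above.
  public static void printLocations(Connection conn, TableName table) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}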
2023-07-18 10:14:49,116 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=08460148a3f0ee2c4975a15eedae70f2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:49,116 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675289116"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675289116"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675289116"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675289116"}]},"ts":"1689675289116"} 2023-07-18 10:14:49,119 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-18 10:14:49,119 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 08460148a3f0ee2c4975a15eedae70f2, server=jenkins-hbase4.apache.org,35633,1689675275991 in 227 msec 2023-07-18 10:14:49,120 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, REOPEN/MOVE in 563 msec 2023-07-18 10:14:49,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-18 10:14:49,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_740923107. 2023-07-18 10:14:49,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:49,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:49,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:49,569 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 10:14:49,569 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:49,570 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 10:14:49,570 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:49,571 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:49,571 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:49,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_740923107 2023-07-18 10:14:49,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:49,574 INFO [Listener at localhost/45689] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-18 10:14:49,574 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-18 10:14:49,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 10:14:49,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 10:14:49,582 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675289582"}]},"ts":"1689675289582"} 2023-07-18 10:14:49,584 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-18 10:14:49,586 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-18 10:14:49,590 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, UNASSIGN}] 2023-07-18 10:14:49,592 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, UNASSIGN 2023-07-18 10:14:49,594 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=c5ab42bbbc8c13633597d8400a815d71, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:49,594 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675289594"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675289594"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675289594"}]},"ts":"1689675289594"} 2023-07-18 10:14:49,596 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure c5ab42bbbc8c13633597d8400a815d71, 
server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:49,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 10:14:49,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c5ab42bbbc8c13633597d8400a815d71, disabling compactions & flushes 2023-07-18 10:14:49,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:49,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:49,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. after waiting 0 ms 2023-07-18 10:14:49,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 2023-07-18 10:14:49,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:49,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71. 
2023-07-18 10:14:49,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c5ab42bbbc8c13633597d8400a815d71: 2023-07-18 10:14:49,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,762 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=c5ab42bbbc8c13633597d8400a815d71, regionState=CLOSED 2023-07-18 10:14:49,762 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675289762"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675289762"}]},"ts":"1689675289762"} 2023-07-18 10:14:49,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-18 10:14:49,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure c5ab42bbbc8c13633597d8400a815d71, server=jenkins-hbase4.apache.org,35633,1689675275991 in 167 msec 2023-07-18 10:14:49,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-18 10:14:49,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=c5ab42bbbc8c13633597d8400a815d71, UNASSIGN in 179 msec 2023-07-18 10:14:49,775 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675289775"}]},"ts":"1689675289775"} 2023-07-18 10:14:49,778 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-18 10:14:49,779 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-18 10:14:49,782 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 207 msec 2023-07-18 10:14:49,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 10:14:49,885 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-18 10:14:49,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-18 10:14:49,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 10:14:49,889 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 10:14:49,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_740923107' 2023-07-18 10:14:49,890 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 10:14:49,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:49,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_740923107 2023-07-18 10:14:49,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:49,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:49,895 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 10:14:49,897 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/recovered.edits] 2023-07-18 10:14:49,904 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/recovered.edits/7.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71/recovered.edits/7.seqid 2023-07-18 10:14:49,905 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveA/c5ab42bbbc8c13633597d8400a815d71 2023-07-18 10:14:49,905 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 10:14:49,908 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 10:14:49,913 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-18 10:14:49,915 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-18 10:14:49,916 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 10:14:49,916 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
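Teardown disables and then deletes GrouptestMultiTableMoveA (and later GrouptestMultiTableMoveB), and the RSGroupAdminEndpoint strips each deleted table from Group_testMultiTableMove_740923107. A minimal sketch of the equivalent Admin calls; the helper class is illustrative and not part of the test.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTableSketch {
  // Disable-then-delete, mirroring the DisableTableProcedure/DeleteTableProcedure
  // pairs in the log (pid=106/109 for table A, pid=110/113 for table B); the
  // rsgroup endpoint drops the table from its group once the delete runs.
  public static void drop(Admin admin, TableName table) throws Exception {
    if (admin.isTableEnabled(table)) {
      admin.disableTable(table);
    }
    admin.deleteTable(table);
  }
}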
2023-07-18 10:14:49,916 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675289916"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:49,918 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 10:14:49,918 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c5ab42bbbc8c13633597d8400a815d71, NAME => 'GrouptestMultiTableMoveA,,1689675286770.c5ab42bbbc8c13633597d8400a815d71.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 10:14:49,918 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-18 10:14:49,918 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689675289918"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:49,920 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-18 10:14:49,922 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 10:14:49,923 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 36 msec 2023-07-18 10:14:49,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 10:14:49,998 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-18 10:14:49,999 INFO [Listener at localhost/45689] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-18 10:14:49,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-18 10:14:50,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 10:14:50,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 10:14:50,003 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675290003"}]},"ts":"1689675290003"} 2023-07-18 10:14:50,005 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-18 10:14:50,008 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-18 10:14:50,009 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, UNASSIGN}] 2023-07-18 10:14:50,011 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, UNASSIGN 2023-07-18 10:14:50,012 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=08460148a3f0ee2c4975a15eedae70f2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:50,012 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675290012"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675290012"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675290012"}]},"ts":"1689675290012"} 2023-07-18 10:14:50,013 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 08460148a3f0ee2c4975a15eedae70f2, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:50,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 10:14:50,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:50,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 08460148a3f0ee2c4975a15eedae70f2, disabling compactions & flushes 2023-07-18 10:14:50,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:50,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:50,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. after waiting 0 ms 2023-07-18 10:14:50,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 2023-07-18 10:14:50,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:50,184 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2. 
2023-07-18 10:14:50,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 08460148a3f0ee2c4975a15eedae70f2: 2023-07-18 10:14:50,186 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:50,186 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=08460148a3f0ee2c4975a15eedae70f2, regionState=CLOSED 2023-07-18 10:14:50,186 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689675290186"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675290186"}]},"ts":"1689675290186"} 2023-07-18 10:14:50,193 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-18 10:14:50,193 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 08460148a3f0ee2c4975a15eedae70f2, server=jenkins-hbase4.apache.org,35633,1689675275991 in 178 msec 2023-07-18 10:14:50,194 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-18 10:14:50,195 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=08460148a3f0ee2c4975a15eedae70f2, UNASSIGN in 184 msec 2023-07-18 10:14:50,195 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675290195"}]},"ts":"1689675290195"} 2023-07-18 10:14:50,197 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-18 10:14:50,198 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-18 10:14:50,200 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 200 msec 2023-07-18 10:14:50,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 10:14:50,306 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-18 10:14:50,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-18 10:14:50,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 10:14:50,309 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 10:14:50,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_740923107' 2023-07-18 10:14:50,310 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 10:14:50,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_740923107 2023-07-18 10:14:50,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:50,315 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:50,317 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/recovered.edits] 2023-07-18 10:14:50,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 10:14:50,323 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/recovered.edits/7.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2/recovered.edits/7.seqid 2023-07-18 10:14:50,324 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/GrouptestMultiTableMoveB/08460148a3f0ee2c4975a15eedae70f2 2023-07-18 10:14:50,324 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 10:14:50,329 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 10:14:50,331 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-18 10:14:50,333 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-18 10:14:50,334 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 10:14:50,334 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-18 10:14:50,334 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675290334"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:50,335 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 10:14:50,336 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 08460148a3f0ee2c4975a15eedae70f2, NAME => 'GrouptestMultiTableMoveB,,1689675287920.08460148a3f0ee2c4975a15eedae70f2.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 10:14:50,336 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-18 10:14:50,336 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689675290336"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:50,340 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-18 10:14:50,343 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 10:14:50,344 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 36 msec 2023-07-18 10:14:50,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 10:14:50,421 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-18 10:14:50,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,426 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:50,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
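The DISABLE (procId 110) and DELETE (procId 113) operations for GrouptestMultiTableMoveB recorded above are driven from the test client through the standard HBase Admin API; the repeated "Checking to see if procedure is done pid=..." entries are the master answering the client's polling for those procedures. A minimal client-side sketch of that sequence, assuming an open Connection to the mini cluster (only the table name is taken from the log; the class and method names here are illustrative):

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

// Sketch only: mirrors the disable-then-delete sequence visible in the log above.
public final class DropTableSketch {
  // 'connection' is assumed to be a live Connection to the test mini cluster.
  static void dropTable(Connection connection) throws IOException {
    TableName table = TableName.valueOf("GrouptestMultiTableMoveB"); // name from the log
    try (Admin admin = connection.getAdmin()) {
      if (admin.tableExists(table)) {
        admin.disableTable(table); // shows up as DisableTableProcedure (pid=110 in this run)
        admin.deleteTable(table);  // shows up as DeleteTableProcedure (pid=113 in this run)
      }
    }
  }
}

Both calls block until the corresponding procedure finishes, which is why the client only logs "Operation: DELETE ... completed" after the ProcedureExecutor reports pid=113 as SUCCESS.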
2023-07-18 10:14:50,426 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:50,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35633] to rsgroup default 2023-07-18 10:14:50,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_740923107 2023-07-18 10:14:50,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:50,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_740923107, current retry=0 2023-07-18 10:14:50,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991] are moved back to Group_testMultiTableMove_740923107 2023-07-18 10:14:50,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_740923107 => default 2023-07-18 10:14:50,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,434 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_740923107 2023-07-18 10:14:50,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 10:14:50,442 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:50,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:50,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
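The group cleanup above, moving jenkins-hbase4.apache.org:35633 back to the default group and then removing Group_testMultiTableMove_740923107, goes through the same RSGroupAdminClient that the stack traces later in this log reference (RSGroupAdminClient.moveServers). A rough sketch of those two calls, assuming an open Connection; the host, port, and group names are copied from the log entries above, everything else is illustrative:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Sketch only: mirrors the rsgroup teardown visible in the log above.
public final class RsGroupTeardownSketch {
  static void restoreDefaultGroup(Connection connection) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
    // Move the region server back to the default group (host:port from the log above).
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 35633)),
        "default");
    // Drop the now-empty test group.
    rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_740923107");
  }
}

Because both test tables have already been deleted, the move has no regions left to reassign, which is why the master logs "Moving 0 region(s) to group ..." before reporting the server move as done.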
2023-07-18 10:14:50,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:50,444 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:50,444 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,445 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:50,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:50,451 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:50,454 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:50,454 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:50,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:50,460 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:50,462 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,462 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,464 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:50,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:50,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676490464, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:50,465 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:50,467 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:50,467 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,467 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,468 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:50,469 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:50,469 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,488 INFO [Listener at localhost/45689] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=508 (was 513), OpenFileDescriptor=789 (was 793), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=458 (was 458), ProcessCount=173 (was 173), AvailableMemoryMB=3563 (was 3770) 2023-07-18 10:14:50,488 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-18 10:14:50,506 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=508, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=458, ProcessCount=173, AvailableMemoryMB=3562 2023-07-18 10:14:50,506 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-18 10:14:50,506 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-18 10:14:50,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,512 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:50,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 10:14:50,512 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:50,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:50,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:50,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:50,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:50,521 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:50,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:50,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:50,528 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:50,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:50,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:50,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676490532, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:50,533 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:14:50,535 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:50,535 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,535 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,536 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:50,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:50,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:50,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-18 10:14:50,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 10:14:50,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:50,546 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:50,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup oldGroup 2023-07-18 10:14:50,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 10:14:50,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:50,555 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 10:14:50,555 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048] are moved back to default 2023-07-18 10:14:50,555 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-18 10:14:50,555 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,558 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,558 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,560 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 10:14:50,560 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 10:14:50,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:50,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,563 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-18 10:14:50,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 10:14:50,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 10:14:50,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:50,569 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:50,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40931] to rsgroup anotherRSGroup 2023-07-18 10:14:50,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 10:14:50,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 10:14:50,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:50,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 10:14:50,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40931,1689675272348] are moved back to default 2023-07-18 10:14:50,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-18 10:14:50,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,587 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,587 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,589 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 10:14:50,589 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,590 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 10:14:50,590 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,595 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-18 10:14:50,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:50,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:40186 deadline: 1689676490594, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-18 10:14:50,597 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-18 10:14:50,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:50,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:40186 deadline: 1689676490597, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-18 10:14:50,598 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-18 10:14:50,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:50,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:40186 deadline: 1689676490598, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-18 10:14:50,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-18 10:14:50,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:50,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:40186 deadline: 1689676490598, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-18 10:14:50,602 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,602 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,603 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:50,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
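The four rename failures above exercise three server-side constraints checked in RSGroupInfoManagerImpl.renameRSGroup (lines 403, 407 and 410 in the stack traces): the default group cannot be renamed, the source group must exist, and the target name must not already be taken. A minimal sketch of that validation, reconstructed only from the exception messages logged here (the class name RenameChecks and the Set parameter are illustrative, not the real implementation), could look like:

    import java.util.Set;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RenameChecks {
      // Mirrors the three checks whose messages appear in the log above; the real
      // logic lives in RSGroupInfoManagerImpl.renameRSGroup on the master.
      static void validateRename(Set<String> existingGroups, String oldName, String newName)
          throws ConstraintException {
        if (RSGroupInfo.DEFAULT_GROUP.equals(oldName)) {
          throw new ConstraintException("Can't rename default rsgroup");
        }
        if (!existingGroups.contains(oldName)) {
          throw new ConstraintException("RSGroup " + oldName + " does not exist");
        }
        if (existingGroups.contains(newName)) {
          throw new ConstraintException("Group already exists: " + newName);
        }
      }
    }

With existingGroups containing default, oldGroup, anotherRSGroup and master, the four attempts logged above (nonExistingRSGroup -> newRSGroup1, oldGroup -> anotherRSGroup, default -> newRSGroup2, oldGroup -> default) raise exactly the exceptions recorded by ipc.MetricsHBaseServer.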
2023-07-18 10:14:50,603 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:50,604 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40931] to rsgroup default 2023-07-18 10:14:50,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 10:14:50,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 10:14:50,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:50,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-18 10:14:50,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40931,1689675272348] are moved back to anotherRSGroup 2023-07-18 10:14:50,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-18 10:14:50,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,611 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-18 10:14:50,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 10:14:50,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 10:14:50,619 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:50,620 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:50,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-18 10:14:50,620 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:50,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup default 2023-07-18 10:14:50,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 10:14:50,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:50,626 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-18 10:14:50,626 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048] are moved back to oldGroup 2023-07-18 10:14:50,626 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-18 10:14:50,626 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,627 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-18 10:14:50,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 10:14:50,633 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:50,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:50,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
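The sequence above mirrors the per-test cleanup in TestRSGroupsBase: empty MoveTables calls are ignored, the group's servers are moved back to default, and the now-empty group is removed (the ZK GroupInfo count drops accordingly). A compact sketch of the same teardown driven from a client, assuming the branch-2.4 RSGroupAdminClient API and a cluster with the rsgroup coprocessor endpoint loaded (hostnames and ports copied from the log; the class name RSGroupTeardown is illustrative):

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class RSGroupTeardown {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);
          // Move the group's servers back to 'default', then drop the now-empty
          // group, mirroring the MoveServers/RemoveRSGroup requests in the log.
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 40033));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35633));
          admin.moveServers(servers, "default");
          admin.removeRSGroup("oldGroup");
        }
      }
    }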
2023-07-18 10:14:50,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:50,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:50,635 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,635 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:50,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:50,641 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:50,644 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:50,644 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:50,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:50,649 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 10:14:50,650 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:50,653 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,653 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,655 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:50,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 
is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:50,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676490655, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:50,656 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:50,658 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:50,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,659 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:50,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:50,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,694 INFO [Listener at localhost/45689] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=513 (was 508) Potentially hanging thread: hconnection-0x297c531f-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=789 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=458 (was 458), ProcessCount=173 (was 173), AvailableMemoryMB=3558 (was 3562) 2023-07-18 10:14:50,694 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-18 10:14:50,717 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=513, OpenFileDescriptor=791, MaxFileDescriptor=60000, SystemLoadAverage=458, ProcessCount=173, AvailableMemoryMB=3558 2023-07-18 10:14:50,717 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-18 10:14:50,718 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-18 10:14:50,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,727 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:50,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
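The ConstraintException just above is tolerated by the harness (logged as "Got this on setup, FYI"), and the ResourceChecker lines record per-test resource accounting: thread count, open file descriptors, system load and available memory before and after each method, with a WARN once the thread count exceeds 500. The following is not HBase's ResourceChecker, only an illustrative sketch of that kind of before/after bookkeeping using a plain JMX thread count (the class name SimpleThreadAccounting is made up):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public final class SimpleThreadAccounting {
      // Matches the WARN threshold seen in the log ("Thread=513 is superior to 500").
      private static final int THREAD_WARN_THRESHOLD = 500;

      public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int before = threads.getThreadCount();
        // ... run the test body here ...
        int after = threads.getThreadCount();
        System.out.printf("Thread=%d (was %d)%n", after, before);
        if (after > THREAD_WARN_THRESHOLD) {
          System.out.printf("Thread=%d is superior to %d%n", after, THREAD_WARN_THRESHOLD);
        }
      }
    }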
2023-07-18 10:14:50,727 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:50,728 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:50,728 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:50,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:50,736 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:50,739 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:50,740 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:50,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:50,747 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:50,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,753 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:50,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:50,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676490753, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:50,754 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:50,755 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:50,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,757 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:50,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:50,758 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,758 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:50,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-18 10:14:50,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 10:14:50,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:50,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:50,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,774 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup oldgroup 2023-07-18 10:14:50,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 10:14:50,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:50,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 10:14:50,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048] are moved back to default 2023-07-18 10:14:50,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-18 10:14:50,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:50,781 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:50,781 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:50,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 10:14:50,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:50,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:50,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-18 10:14:50,787 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:50,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-18 10:14:50,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 10:14:50,789 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 10:14:50,790 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:50,790 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:50,790 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:50,793 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:50,795 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:50,796 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 empty. 
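The CreateTableProcedure above (pid=114) builds 'testRename' with a single family 'tr', one version and REGION_REPLICATION = 1, all other attributes at their defaults. A client-side equivalent of that request, sketched against the standard HBase 2.x Admin API (the class name CreateTestRenameTable is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class CreateTestRenameTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Matches the descriptor logged by HMaster: one family 'tr', one version,
          // REGION_REPLICATION = 1, everything else left at defaults.
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
                  .setMaxVersions(1)
                  .build())
              .build());
        }
      }
    }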
2023-07-18 10:14:50,796 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:50,796 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-18 10:14:50,828 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:50,833 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => c8e2eee4a7112b8e2faf0ec9b8864302, NAME => 'testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:50,848 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:50,848 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing c8e2eee4a7112b8e2faf0ec9b8864302, disabling compactions & flushes 2023-07-18 10:14:50,848 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:50,848 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:50,848 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. after waiting 0 ms 2023-07-18 10:14:50,848 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:50,848 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:50,848 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for c8e2eee4a7112b8e2faf0ec9b8864302: 2023-07-18 10:14:50,853 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:50,855 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675290854"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675290854"}]},"ts":"1689675290854"} 2023-07-18 10:14:50,856 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
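
The create 'testRename' request a few entries above carries the full descriptor for the single column family 'tr'. A minimal sketch of the equivalent Admin call, assuming the standard 2.x client API; only the bloom filter and max versions are set explicitly, and the other attributes printed in the log appear to be defaults:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestRenameTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Single family 'tr' with VERSIONS => '1' and BLOOMFILTER => 'NONE', matching the descriptor above.
          TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
                  .setMaxVersions(1)
                  .setBloomFilterType(BloomType.NONE)
                  .build());
          admin.createTable(table.build()); // drives a CreateTableProcedure on the master, as with pid=114 above
        }
      }
    }
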
2023-07-18 10:14:50,857 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:50,857 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675290857"}]},"ts":"1689675290857"} 2023-07-18 10:14:50,859 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-18 10:14:50,863 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:50,863 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:50,863 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:50,863 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:50,864 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, ASSIGN}] 2023-07-18 10:14:50,866 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, ASSIGN 2023-07-18 10:14:50,867 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:50,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 10:14:51,017 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 10:14:51,018 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:51,018 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675291018"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675291018"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675291018"}]},"ts":"1689675291018"} 2023-07-18 10:14:51,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:51,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 10:14:51,176 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:51,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8e2eee4a7112b8e2faf0ec9b8864302, NAME => 'testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:51,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:51,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,179 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,180 DEBUG [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/tr 2023-07-18 10:14:51,180 DEBUG [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/tr 2023-07-18 10:14:51,181 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8e2eee4a7112b8e2faf0ec9b8864302 columnFamilyName tr 2023-07-18 10:14:51,181 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] regionserver.HStore(310): Store=c8e2eee4a7112b8e2faf0ec9b8864302/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:51,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:51,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8e2eee4a7112b8e2faf0ec9b8864302; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9942099200, jitterRate=-0.07406985759735107}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:51,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8e2eee4a7112b8e2faf0ec9b8864302: 2023-07-18 10:14:51,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302., pid=116, masterSystemTime=1689675291172 2023-07-18 10:14:51,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:51,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 
2023-07-18 10:14:51,195 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:51,196 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675291195"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675291195"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675291195"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675291195"}]},"ts":"1689675291195"} 2023-07-18 10:14:51,199 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-18 10:14:51,199 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,42163,1689675271845 in 177 msec 2023-07-18 10:14:51,200 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-18 10:14:51,200 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, ASSIGN in 335 msec 2023-07-18 10:14:51,202 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:51,202 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675291202"}]},"ts":"1689675291202"} 2023-07-18 10:14:51,203 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-18 10:14:51,206 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:51,207 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 421 msec 2023-07-18 10:14:51,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 10:14:51,392 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-18 10:14:51,393 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-18 10:14:51,393 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:51,396 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-18 10:14:51,397 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:51,397 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
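
The Listener thread above calls HBaseTestingUtility.waitUntilAllRegionsAssigned with a 60000 ms timeout before using the new table. A short sketch of that wait, assuming a started mini cluster held in a TEST_UTIL field as in this test; the Admin-based variant is an alternative for a plain client and simply polls table availability:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class WaitForAssignment {
      // Inside a test: blocks until every region of the table is assigned (60 s timeout, as logged above).
      static void waitWithTestUtil(HBaseTestingUtility TEST_UTIL) throws Exception {
        TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"));
      }

      // Outside a test, a plain client can poll table availability instead.
      static void waitWithAdmin(Admin admin) throws Exception {
        TableName table = TableName.valueOf("testRename");
        while (!admin.isTableAvailable(table)) {
          Thread.sleep(100); // retry until all regions are open
        }
      }
    }
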
2023-07-18 10:14:51,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-18 10:14:51,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 10:14:51,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:51,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:51,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:51,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-18 10:14:51,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region c8e2eee4a7112b8e2faf0ec9b8864302 to RSGroup oldgroup 2023-07-18 10:14:51,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:51,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:51,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:51,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:14:51,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:51,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, REOPEN/MOVE 2023-07-18 10:14:51,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-18 10:14:51,406 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, REOPEN/MOVE 2023-07-18 10:14:51,407 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:51,407 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675291407"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675291407"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675291407"}]},"ts":"1689675291407"} 2023-07-18 10:14:51,408 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:51,561 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,562 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8e2eee4a7112b8e2faf0ec9b8864302, disabling compactions & flushes 2023-07-18 10:14:51,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:51,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:51,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. after waiting 0 ms 2023-07-18 10:14:51,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:51,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:51,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:51,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8e2eee4a7112b8e2faf0ec9b8864302: 2023-07-18 10:14:51,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c8e2eee4a7112b8e2faf0ec9b8864302 move to jenkins-hbase4.apache.org,35633,1689675275991 record at close sequenceid=2 2023-07-18 10:14:51,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,572 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=CLOSED 2023-07-18 10:14:51,572 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675291572"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675291572"}]},"ts":"1689675291572"} 2023-07-18 10:14:51,575 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-18 10:14:51,575 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,42163,1689675271845 in 165 msec 2023-07-18 10:14:51,576 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35633,1689675275991; 
forceNewPlan=false, retain=false 2023-07-18 10:14:51,726 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 10:14:51,726 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:51,727 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675291726"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675291726"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675291726"}]},"ts":"1689675291726"} 2023-07-18 10:14:51,729 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:51,884 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:51,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8e2eee4a7112b8e2faf0ec9b8864302, NAME => 'testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:51,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:51,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,887 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,888 DEBUG [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/tr 2023-07-18 10:14:51,888 DEBUG [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/tr 2023-07-18 10:14:51,888 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8e2eee4a7112b8e2faf0ec9b8864302 columnFamilyName tr 2023-07-18 10:14:51,889 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] regionserver.HStore(310): Store=c8e2eee4a7112b8e2faf0ec9b8864302/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:51,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:51,895 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8e2eee4a7112b8e2faf0ec9b8864302; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10102443840, jitterRate=-0.05913659930229187}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:51,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8e2eee4a7112b8e2faf0ec9b8864302: 2023-07-18 10:14:51,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302., pid=119, masterSystemTime=1689675291880 2023-07-18 10:14:51,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:51,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 
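
The move tables [testRename] to rsgroup oldgroup request is what drives the REOPEN/MOVE above: pid=117 closes region c8e2eee4a7112b8e2faf0ec9b8864302 on the server at port 42163 and reopens it on 35633, one of the two servers moved into oldgroup earlier. A hedged sketch of the client side, again assuming the RSGroupAdminClient helper; the region transition itself runs as a TransitRegionStateProcedure on the master, not in the caller:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Returns once every region of the table has been reopened on a server in the
          // target group (the master waits via ProcedureSyncWait on the move procedure,
          // as with pid=117 in the entries above).
          rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
        }
      }
    }
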
2023-07-18 10:14:51,898 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:51,898 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675291898"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675291898"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675291898"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675291898"}]},"ts":"1689675291898"} 2023-07-18 10:14:51,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-18 10:14:51,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,35633,1689675275991 in 171 msec 2023-07-18 10:14:51,902 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, REOPEN/MOVE in 496 msec 2023-07-18 10:14:52,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-18 10:14:52,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-18 10:14:52,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:52,410 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:52,411 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:52,417 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:52,418 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 10:14:52,418 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:52,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 10:14:52,421 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:52,422 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 10:14:52,422 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:52,423 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:52,423 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:52,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-18 10:14:52,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 10:14:52,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 10:14:52,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:52,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:52,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:52,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:52,436 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:52,436 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:52,439 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40931] to rsgroup normal 2023-07-18 10:14:52,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 10:14:52,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 10:14:52,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:52,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:52,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:52,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 10:14:52,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40931,1689675272348] are moved back to default 2023-07-18 10:14:52,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-18 10:14:52,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:52,454 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:52,454 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:52,459 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 10:14:52,459 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:52,462 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:52,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-18 10:14:52,465 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:52,465 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-18 10:14:52,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 10:14:52,467 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 10:14:52,467 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 10:14:52,468 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:52,468 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-18 10:14:52,469 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:52,472 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:52,480 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,482 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 empty. 2023-07-18 10:14:52,482 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,483 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-18 10:14:52,519 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:52,528 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2929c6f81410eb8cdf881f05484b0086, NAME => 'unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:52,557 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:52,557 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 2929c6f81410eb8cdf881f05484b0086, disabling compactions & flushes 2023-07-18 10:14:52,557 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:52,557 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:52,557 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. after waiting 0 ms 2023-07-18 10:14:52,557 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:52,557 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 
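
The GetRSGroupInfoOfTable and GetRSGroupInfo requests in the surrounding entries are the test confirming which group now owns each table and server. A short sketch of those two lookups, under the same RSGroupAdminClient assumption; names and output are illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class CheckGroupMembership {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // GetRSGroupInfoOfTable: which group owns this table's regions now?
          RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename is in group " + ofTable.getName());
          // GetRSGroupInfo: which servers and tables does a named group hold?
          RSGroupInfo oldgroup = rsGroupAdmin.getRSGroupInfo("oldgroup");
          System.out.println("oldgroup servers: " + oldgroup.getServers()
              + ", tables: " + oldgroup.getTables());
        }
      }
    }
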
2023-07-18 10:14:52,557 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 2929c6f81410eb8cdf881f05484b0086: 2023-07-18 10:14:52,560 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:52,562 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675292562"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675292562"}]},"ts":"1689675292562"} 2023-07-18 10:14:52,563 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 10:14:52,564 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:52,565 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675292565"}]},"ts":"1689675292565"} 2023-07-18 10:14:52,566 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-18 10:14:52,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 10:14:52,570 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, ASSIGN}] 2023-07-18 10:14:52,573 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, ASSIGN 2023-07-18 10:14:52,574 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:52,726 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:52,727 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675292726"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675292726"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675292726"}]},"ts":"1689675292726"} 2023-07-18 10:14:52,728 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:52,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-18 10:14:52,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:52,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2929c6f81410eb8cdf881f05484b0086, NAME => 'unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:52,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:52,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,888 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,890 DEBUG [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/ut 2023-07-18 10:14:52,890 DEBUG [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/ut 2023-07-18 10:14:52,890 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2929c6f81410eb8cdf881f05484b0086 columnFamilyName ut 2023-07-18 10:14:52,891 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] regionserver.HStore(310): Store=2929c6f81410eb8cdf881f05484b0086/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:52,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:52,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:52,899 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2929c6f81410eb8cdf881f05484b0086; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11239945440, jitterRate=0.04680149257183075}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:52,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2929c6f81410eb8cdf881f05484b0086: 2023-07-18 10:14:52,899 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086., pid=122, masterSystemTime=1689675292880 2023-07-18 10:14:52,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:52,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 
2023-07-18 10:14:52,902 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:52,902 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675292902"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675292902"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675292902"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675292902"}]},"ts":"1689675292902"} 2023-07-18 10:14:52,908 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-18 10:14:52,908 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,42163,1689675271845 in 178 msec 2023-07-18 10:14:52,910 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-18 10:14:52,910 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, ASSIGN in 338 msec 2023-07-18 10:14:52,912 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:52,913 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675292912"}]},"ts":"1689675292912"} 2023-07-18 10:14:52,914 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-18 10:14:52,924 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:52,926 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 462 msec 2023-07-18 10:14:53,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 10:14:53,073 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-18 10:14:53,073 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-18 10:14:53,073 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:53,077 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-18 10:14:53,078 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:53,078 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
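
At this point the single region of unmovedTable (2929c6f81410eb8cdf881f05484b0086) is open on the server at port 42163, and the entries that follow move the table to the 'normal' group, whose only member is the server at port 40931. A small sketch for checking where a table's regions currently sit, using the standard RegionLocator API; only the table name is taken from the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionLocations {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("unmovedTable"))) {
          // One line per region: encoded region name plus the region server currently hosting it.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
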
2023-07-18 10:14:53,080 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-18 10:14:53,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 10:14:53,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 10:14:53,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:53,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:53,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:53,086 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-18 10:14:53,086 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 2929c6f81410eb8cdf881f05484b0086 to RSGroup normal 2023-07-18 10:14:53,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, REOPEN/MOVE 2023-07-18 10:14:53,087 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-18 10:14:53,088 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, REOPEN/MOVE 2023-07-18 10:14:53,088 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:53,089 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675293088"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675293088"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675293088"}]},"ts":"1689675293088"} 2023-07-18 10:14:53,090 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:53,252 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2929c6f81410eb8cdf881f05484b0086, disabling compactions & flushes 2023-07-18 10:14:53,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 
2023-07-18 10:14:53,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:53,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. after waiting 0 ms 2023-07-18 10:14:53,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:53,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:53,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:53,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2929c6f81410eb8cdf881f05484b0086: 2023-07-18 10:14:53,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2929c6f81410eb8cdf881f05484b0086 move to jenkins-hbase4.apache.org,40931,1689675272348 record at close sequenceid=2 2023-07-18 10:14:53,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,260 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=CLOSED 2023-07-18 10:14:53,260 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675293260"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675293260"}]},"ts":"1689675293260"} 2023-07-18 10:14:53,263 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-18 10:14:53,263 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,42163,1689675271845 in 171 msec 2023-07-18 10:14:53,264 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:53,414 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:53,415 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675293414"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675293414"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675293414"}]},"ts":"1689675293414"} 2023-07-18 10:14:53,416 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:53,579 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:53,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2929c6f81410eb8cdf881f05484b0086, NAME => 'unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:53,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:53,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,582 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,583 DEBUG [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/ut 2023-07-18 10:14:53,583 DEBUG [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/ut 2023-07-18 10:14:53,584 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
2929c6f81410eb8cdf881f05484b0086 columnFamilyName ut 2023-07-18 10:14:53,584 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] regionserver.HStore(310): Store=2929c6f81410eb8cdf881f05484b0086/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:53,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:53,594 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2929c6f81410eb8cdf881f05484b0086; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9800817280, jitterRate=-0.08722776174545288}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:53,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2929c6f81410eb8cdf881f05484b0086: 2023-07-18 10:14:53,594 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086., pid=125, masterSystemTime=1689675293568 2023-07-18 10:14:53,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:53,596 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 
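The close on jenkins-hbase4.apache.org,42163 and the reopen on 40931 just logged are the server side of one "move tables [unmovedTable] to rsgroup normal" request. A hedged sketch of that request using the RSGroupAdminClient from this hbase-rsgroup module (the same client class the stack trace at the end of this log goes through); the group and table names come from the log, the rest is assumed boilerplate:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The master first rewrites the /hbase/rsgroup znodes with the new
          // table-to-group mapping, then reopens each region of the table on a
          // server of the target group via TransitRegionStateProcedure
          // (REOPEN/MOVE), which is exactly the sequence recorded above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
        }
      }
    }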
2023-07-18 10:14:53,597 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:53,597 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675293596"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675293596"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675293596"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675293596"}]},"ts":"1689675293596"} 2023-07-18 10:14:53,599 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-18 10:14:53,600 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,40931,1689675272348 in 182 msec 2023-07-18 10:14:53,601 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, REOPEN/MOVE in 513 msec 2023-07-18 10:14:54,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-18 10:14:54,088 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-18 10:14:54,088 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:54,092 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:54,092 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:54,095 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:54,096 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 10:14:54,096 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:54,097 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 10:14:54,097 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:54,098 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 10:14:54,098 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:54,099 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-18 10:14:54,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 10:14:54,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:54,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:54,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 10:14:54,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-18 10:14:54,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-18 10:14:54,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:54,110 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:54,113 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-18 10:14:54,113 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:54,114 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 10:14:54,114 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:54,115 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 10:14:54,115 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:54,122 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:54,122 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:54,124 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-18 10:14:54,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 10:14:54,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:54,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:54,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 10:14:54,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:54,134 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-18 10:14:54,134 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region 2929c6f81410eb8cdf881f05484b0086 to RSGroup default 2023-07-18 10:14:54,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, REOPEN/MOVE 2023-07-18 10:14:54,138 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, REOPEN/MOVE 2023-07-18 10:14:54,139 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 10:14:54,139 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:54,139 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675294139"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675294139"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675294139"}]},"ts":"1689675294139"} 2023-07-18 10:14:54,141 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:54,241 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for 
table 'testRename' 2023-07-18 10:14:54,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2929c6f81410eb8cdf881f05484b0086, disabling compactions & flushes 2023-07-18 10:14:54,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:54,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:54,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. after waiting 0 ms 2023-07-18 10:14:54,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:54,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:54,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:54,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2929c6f81410eb8cdf881f05484b0086: 2023-07-18 10:14:54,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2929c6f81410eb8cdf881f05484b0086 move to jenkins-hbase4.apache.org,42163,1689675271845 record at close sequenceid=5 2023-07-18 10:14:54,305 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,305 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=CLOSED 2023-07-18 10:14:54,305 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675294305"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675294305"}]},"ts":"1689675294305"} 2023-07-18 10:14:54,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-18 10:14:54,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,40931,1689675272348 in 166 msec 2023-07-18 10:14:54,315 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:54,466 INFO [PEWorker-5] 
assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:54,466 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675294466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675294466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675294466"}]},"ts":"1689675294466"} 2023-07-18 10:14:54,468 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:54,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:54,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2929c6f81410eb8cdf881f05484b0086, NAME => 'unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:54,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:54,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,625 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,629 DEBUG [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/ut 2023-07-18 10:14:54,630 DEBUG [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/ut 2023-07-18 10:14:54,630 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2929c6f81410eb8cdf881f05484b0086 columnFamilyName ut 2023-07-18 10:14:54,631 INFO [StoreOpener-2929c6f81410eb8cdf881f05484b0086-1] regionserver.HStore(310): Store=2929c6f81410eb8cdf881f05484b0086/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:54,632 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:54,638 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2929c6f81410eb8cdf881f05484b0086; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11308640320, jitterRate=0.053199201822280884}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:54,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2929c6f81410eb8cdf881f05484b0086: 2023-07-18 10:14:54,638 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086., pid=128, masterSystemTime=1689675294619 2023-07-18 10:14:54,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:54,640 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 
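A few entries back the group oldgroup was renamed to newgroup (RSGroupAdminService.RenameRSGroup), with no region movement around it: the rename only rewrites the group znodes and re-points the group's tables and servers. A sketch of the corresponding client call, under the assumption that this branch's RSGroupAdminClient exposes renameRSGroup, as the RenameRSGroup RPC above implies:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Pure metadata operation: tables and servers stay where they are, so
          // no TransitRegionStateProcedure shows up in the log for this step.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
        }
      }
    }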
2023-07-18 10:14:54,641 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2929c6f81410eb8cdf881f05484b0086, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:54,641 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689675294641"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675294641"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675294641"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675294641"}]},"ts":"1689675294641"} 2023-07-18 10:14:54,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-18 10:14:54,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 2929c6f81410eb8cdf881f05484b0086, server=jenkins-hbase4.apache.org,42163,1689675271845 in 175 msec 2023-07-18 10:14:54,646 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2929c6f81410eb8cdf881f05484b0086, REOPEN/MOVE in 511 msec 2023-07-18 10:14:55,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-18 10:14:55,139 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-18 10:14:55,139 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:55,141 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40931] to rsgroup default 2023-07-18 10:14:55,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 10:14:55,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:55,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:55,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 10:14:55,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:14:55,151 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-18 10:14:55,151 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40931,1689675272348] are moved back to normal 2023-07-18 10:14:55,151 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-18 10:14:55,151 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:55,152 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-18 10:14:55,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:55,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:55,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 10:14:55,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 10:14:55,160 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:55,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:55,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 10:14:55,162 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:55,162 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:55,162 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:55,163 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:55,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:55,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 10:14:55,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 10:14:55,169 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:55,171 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-18 10:14:55,173 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:55,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 10:14:55,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:55,175 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-18 10:14:55,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(345): Moving region c8e2eee4a7112b8e2faf0ec9b8864302 to RSGroup default 2023-07-18 10:14:55,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, REOPEN/MOVE 2023-07-18 10:14:55,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 10:14:55,177 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, REOPEN/MOVE 2023-07-18 10:14:55,178 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:55,178 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675295178"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675295178"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675295178"}]},"ts":"1689675295178"} 2023-07-18 10:14:55,179 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,35633,1689675275991}] 2023-07-18 10:14:55,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8e2eee4a7112b8e2faf0ec9b8864302, disabling compactions & flushes 2023-07-18 10:14:55,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:55,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:55,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 
after waiting 0 ms 2023-07-18 10:14:55,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:55,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 10:14:55,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:55,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8e2eee4a7112b8e2faf0ec9b8864302: 2023-07-18 10:14:55,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c8e2eee4a7112b8e2faf0ec9b8864302 move to jenkins-hbase4.apache.org,40931,1689675272348 record at close sequenceid=5 2023-07-18 10:14:55,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,351 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=CLOSED 2023-07-18 10:14:55,351 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675295351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675295351"}]},"ts":"1689675295351"} 2023-07-18 10:14:55,356 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-18 10:14:55,356 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,35633,1689675275991 in 174 msec 2023-07-18 10:14:55,357 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:55,507 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
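The teardown entries above (move servers back to default, then remove rsgroup normal and master) follow the rule that a group can only be dropped once it holds no servers and no tables. A hedged sketch of that cleanup order; the host/port pair is copied from the log and everything else is illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class TearDownGroups {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Empty the group first: send its region servers back to "default".
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40931)),
              "default");
          // Only an empty group can be removed; otherwise the master rejects it.
          rsGroupAdmin.removeRSGroup("normal");
        }
      }
    }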
2023-07-18 10:14:55,508 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:55,508 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675295508"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675295508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675295508"}]},"ts":"1689675295508"} 2023-07-18 10:14:55,509 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:55,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:55,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c8e2eee4a7112b8e2faf0ec9b8864302, NAME => 'testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:14:55,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:55,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,666 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,667 DEBUG [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/tr 2023-07-18 10:14:55,667 DEBUG [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/tr 2023-07-18 10:14:55,668 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c8e2eee4a7112b8e2faf0ec9b8864302 columnFamilyName tr 2023-07-18 10:14:55,668 INFO [StoreOpener-c8e2eee4a7112b8e2faf0ec9b8864302-1] regionserver.HStore(310): Store=c8e2eee4a7112b8e2faf0ec9b8864302/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:55,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:55,673 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c8e2eee4a7112b8e2faf0ec9b8864302; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10099046720, jitterRate=-0.059452980756759644}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:55,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c8e2eee4a7112b8e2faf0ec9b8864302: 2023-07-18 10:14:55,674 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302., pid=131, masterSystemTime=1689675295661 2023-07-18 10:14:55,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:55,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 
2023-07-18 10:14:55,676 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=c8e2eee4a7112b8e2faf0ec9b8864302, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:55,676 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689675295676"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675295676"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675295676"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675295676"}]},"ts":"1689675295676"} 2023-07-18 10:14:55,679 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-18 10:14:55,679 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure c8e2eee4a7112b8e2faf0ec9b8864302, server=jenkins-hbase4.apache.org,40931,1689675272348 in 168 msec 2023-07-18 10:14:55,680 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=c8e2eee4a7112b8e2faf0ec9b8864302, REOPEN/MOVE in 503 msec 2023-07-18 10:14:55,687 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 10:14:56,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-18 10:14:56,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
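Interleaved with the moves, the client keeps issuing ListRSGroupInfos / GetRSGroupInfoOfTable / GetRSGroupInfo calls to check where each table and server currently lives. A small sketch of those read-only calls; printing to stdout is purely for illustration:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class InspectGroups {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          RSGroupInfo ofTable =
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename lives in group " + ofTable.getName());
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName()
                + " servers=" + group.getServers()
                + " tables=" + group.getTables());
          }
        }
      }
    }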
2023-07-18 10:14:56,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:56,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup default 2023-07-18 10:14:56,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 10:14:56,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:56,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-18 10:14:56,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048] are moved back to newgroup 2023-07-18 10:14:56,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-18 10:14:56,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:56,184 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-18 10:14:56,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:56,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:56,191 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:56,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:56,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:56,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:56,201 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:56,204 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,204 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:56,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:56,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676496206, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:56,207 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:14:56,208 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:56,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,209 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:56,210 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:56,210 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:56,227 INFO [Listener at localhost/45689] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=508 (was 513), OpenFileDescriptor=775 (was 791), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 458), ProcessCount=173 (was 173), AvailableMemoryMB=3314 (was 3558) 2023-07-18 10:14:56,227 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-18 10:14:56,247 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=508, OpenFileDescriptor=775, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=173, AvailableMemoryMB=3313 2023-07-18 10:14:56,247 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-18 10:14:56,247 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-18 10:14:56,254 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,254 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:56,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
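[Annotation] The WARN "Got this on setup, FYI" ConstraintException above repeats at every test boundary: the per-method cleanup in TestRSGroupsBase moves all groups back to default, re-creates a "master" group, and then tries to move the master's address (jenkins-hbase4.apache.org:42907) into it; since the HMaster is not a live region server, RSGroupAdminServer rejects the move and the test only logs it. Below is a minimal sketch of that cleanup sequence written against the branch-2.4 RSGroupAdmin coprocessor client API; it is an illustration, not the actual TestRSGroupsBase code, and the class/helper names are assumptions.

```java
// Sketch (assumption, not the real TestRSGroupsBase source) of the cleanup the log records.
import java.util.Collections;

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RsGroupCleanupSketch {

  /** Move every non-default group's tables and servers back to "default", then drop it. */
  static void cleanup(RSGroupAdmin admin) throws Exception {
    for (RSGroupInfo group : admin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue;
      }
      admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);   // MoveTables RPCs
      admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP); // MoveServers RPCs
      admin.removeRSGroup(group.getName());                             // RemoveRSGroup RPC
    }
    // The test then re-creates a "master" group and tries to park the HMaster's address in it.
    // The master is not a live region server, so the move fails with the ConstraintException
    // seen in the "Got this on setup, FYI" WARN entries; the test logs it and moves on.
    admin.addRSGroup("master");
    try {
      admin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:42907")),
          "master");
    } catch (ConstraintException expected) {
      // Logged and ignored by the test.
    }
  }
}
```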
2023-07-18 10:14:56,256 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:56,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:56,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:56,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:56,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:56,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:56,268 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:56,269 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:56,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:56,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:56,275 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:56,280 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,280 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,283 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:56,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:56,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676496283, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:56,284 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:56,286 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:56,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,288 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:56,289 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:56,289 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:56,290 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-18 10:14:56,290 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:56,298 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-18 10:14:56,298 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-18 10:14:56,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-18 10:14:56,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:56,300 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-18 10:14:56,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:56,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:40186 deadline: 1689676496300, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-18 10:14:56,303 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-18 10:14:56,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:56,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 805 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:40186 deadline: 1689676496303, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 10:14:56,306 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-18 10:14:56,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-18 10:14:56,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-18 10:14:56,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:56,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:40186 deadline: 1689676496312, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 10:14:56,318 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,319 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,319 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:56,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
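[Annotation] The testBogusArgs traffic above probes the admin API with unknown names: the GetRSGroupInfo / GetRSGroupInfoOfTable / GetRSGroupInfoOfServer lookups complete without error (no exception is logged), while RemoveRSGroup, MoveServers and BalanceRSGroup against "bogus" are rejected with ConstraintException ("RSGroup bogus does not exist", "RSGroup does not exist: bogus"). The sketch below shows checks of that shape against the RSGroupAdmin client interface; it is an assumption about the test's intent, not the actual TestRSGroupsAdmin1 assertions.

```java
// Sketch (assumption) of bogus-argument checks matching the RPC log above.
import static org.junit.Assert.assertNull;
import static org.junit.Assert.fail;

import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

final class BogusArgsSketch {

  static void probe(RSGroupAdmin admin) throws Exception {
    // Lookups of unknown names simply return no group info.
    assertNull(admin.getRSGroupInfo("bogus"));
    assertNull(admin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")));
    assertNull(admin.getRSGroupOfServer(Address.fromString("bogus:123")));

    // Mutations against an unknown group are rejected server-side with ConstraintException.
    try {
      admin.removeRSGroup("bogus");
      fail("expected ConstraintException");
    } catch (ConstraintException expected) {
    }
    try {
      admin.moveServers(Collections.singleton(Address.fromString("bogus:123")), "bogus");
      fail("expected ConstraintException");
    } catch (ConstraintException expected) {
    }
    try {
      admin.balanceRSGroup("bogus");
      fail("expected ConstraintException");
    } catch (ConstraintException expected) {
    }
  }
}
```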
2023-07-18 10:14:56,320 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:56,320 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:56,320 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:56,321 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:56,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:56,327 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:56,330 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:56,331 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:56,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:56,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:56,341 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:56,346 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,346 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,349 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:56,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:56,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676496349, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:56,353 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:56,354 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:56,355 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,355 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,356 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:56,357 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:56,357 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:56,375 INFO [Listener at localhost/45689] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=512 (was 508) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x297c531f-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x48ef79d1-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=775 (was 775), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 426), ProcessCount=173 (was 173), AvailableMemoryMB=3313 (was 3313) 2023-07-18 10:14:56,375 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-18 10:14:56,394 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512, OpenFileDescriptor=775, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=173, AvailableMemoryMB=3312 2023-07-18 10:14:56,395 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-18 10:14:56,395 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-18 10:14:56,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:56,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 10:14:56,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:56,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:56,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:56,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:56,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:56,411 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:56,413 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:56,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:56,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:56,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:56,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:56,423 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,423 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:56,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:56,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676496425, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:56,426 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:56,427 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:56,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,428 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:56,429 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:56,429 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:56,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:56,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:56,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_946081103 2023-07-18 10:14:56,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_946081103 2023-07-18 10:14:56,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 
10:14:56,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:56,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:56,440 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:56,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,445 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup Group_testDisabledTableMove_946081103 2023-07-18 10:14:56,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_946081103 2023-07-18 10:14:56,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:56,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:56,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 10:14:56,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048] are moved back to default 2023-07-18 10:14:56,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_946081103 2023-07-18 10:14:56,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:56,452 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:56,453 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:56,455 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_946081103 2023-07-18 10:14:56,455 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:56,457 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:14:56,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-18 10:14:56,460 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:14:56,460 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-18 10:14:56,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 10:14:56,462 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_946081103 2023-07-18 10:14:56,462 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:56,463 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:56,463 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:56,465 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:14:56,469 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,469 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,469 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,469 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,469 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,470 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3 empty. 2023-07-18 10:14:56,470 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe empty. 2023-07-18 10:14:56,471 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5 empty. 2023-07-18 10:14:56,471 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,470 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb empty. 2023-07-18 10:14:56,471 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c empty. 2023-07-18 10:14:56,471 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,471 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,471 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,472 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,472 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 10:14:56,501 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-18 10:14:56,503 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8c43e3e22adbffb53a8cdc8c990297c3, NAME => 'Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:56,503 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7d031be796b29b3bfa385fe2708c48cb, NAME => 'Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:56,503 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 24d5395949aa6d77173fe1b70279538c, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:56,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 10:14:56,575 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,575 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 24d5395949aa6d77173fe1b70279538c, disabling compactions & flushes 2023-07-18 10:14:56,575 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:56,575 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:56,575 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. after waiting 0 ms 2023-07-18 10:14:56,575 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 
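The ConstraintException in the teardown block above comes from TestRSGroupsBase trying to move the master's address (jenkins-hbase4.apache.org:42907) into an rsgroup; only live region servers are members of the default group, so the move is rejected with "Server ... is either offline or it does not exist." A minimal sketch of that client call follows — the class names (RSGroupAdminClient, Address) are taken from the stack trace, but the constructor and the moveServers(Set<Address>, String) signature are assumptions, not verified against a specific release.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterIntoGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // RSGroupAdminClient is the internal client used by the test, per the trace;
      // its exact API here is an assumption.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // 42907 is the master's RPC port, not a region server, so it is not a member
      // of the default group and the move fails with the ConstraintException above.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42907)),
          "master");
    }
  }
}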
2023-07-18 10:14:56,575 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:56,575 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 24d5395949aa6d77173fe1b70279538c: 2023-07-18 10:14:56,576 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 66fde4b500d3cedb29703e54ee16e1fe, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:56,576 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,576 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 8c43e3e22adbffb53a8cdc8c990297c3, disabling compactions & flushes 2023-07-18 10:14:56,576 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 2023-07-18 10:14:56,576 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 2023-07-18 10:14:56,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. after waiting 0 ms 2023-07-18 10:14:56,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 2023-07-18 10:14:56,577 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 
2023-07-18 10:14:56,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 8c43e3e22adbffb53a8cdc8c990297c3: 2023-07-18 10:14:56,577 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8f7d8df4b01e7ce2207c8e41eb497ce5, NAME => 'Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp 2023-07-18 10:14:56,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 7d031be796b29b3bfa385fe2708c48cb, disabling compactions & flushes 2023-07-18 10:14:56,577 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 2023-07-18 10:14:56,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 2023-07-18 10:14:56,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. after waiting 0 ms 2023-07-18 10:14:56,577 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 2023-07-18 10:14:56,578 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 2023-07-18 10:14:56,578 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 7d031be796b29b3bfa385fe2708c48cb: 2023-07-18 10:14:56,593 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,593 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 8f7d8df4b01e7ce2207c8e41eb497ce5, disabling compactions & flushes 2023-07-18 10:14:56,593 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 
2023-07-18 10:14:56,594 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 2023-07-18 10:14:56,594 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. after waiting 0 ms 2023-07-18 10:14:56,594 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 2023-07-18 10:14:56,594 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 2023-07-18 10:14:56,594 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 8f7d8df4b01e7ce2207c8e41eb497ce5: 2023-07-18 10:14:56,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 66fde4b500d3cedb29703e54ee16e1fe, disabling compactions & flushes 2023-07-18 10:14:56,595 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:56,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:56,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. after waiting 0 ms 2023-07-18 10:14:56,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:56,595 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 
2023-07-18 10:14:56,596 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 66fde4b500d3cedb29703e54ee16e1fe: 2023-07-18 10:14:56,598 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:14:56,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296599"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675296599"}]},"ts":"1689675296599"} 2023-07-18 10:14:56,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675296599"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675296599"}]},"ts":"1689675296599"} 2023-07-18 10:14:56,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296599"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675296599"}]},"ts":"1689675296599"} 2023-07-18 10:14:56,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675296599"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675296599"}]},"ts":"1689675296599"} 2023-07-18 10:14:56,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296599"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675296599"}]},"ts":"1689675296599"} 2023-07-18 10:14:56,602 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-18 10:14:56,602 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:14:56,603 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675296602"}]},"ts":"1689675296602"} 2023-07-18 10:14:56,604 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-18 10:14:56,608 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:14:56,608 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:14:56,608 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:14:56,608 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:14:56,609 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8c43e3e22adbffb53a8cdc8c990297c3, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7d031be796b29b3bfa385fe2708c48cb, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=24d5395949aa6d77173fe1b70279538c, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=66fde4b500d3cedb29703e54ee16e1fe, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8f7d8df4b01e7ce2207c8e41eb497ce5, ASSIGN}] 2023-07-18 10:14:56,611 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8c43e3e22adbffb53a8cdc8c990297c3, ASSIGN 2023-07-18 10:14:56,611 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=24d5395949aa6d77173fe1b70279538c, ASSIGN 2023-07-18 10:14:56,611 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7d031be796b29b3bfa385fe2708c48cb, ASSIGN 2023-07-18 10:14:56,611 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=66fde4b500d3cedb29703e54ee16e1fe, ASSIGN 2023-07-18 10:14:56,612 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8c43e3e22adbffb53a8cdc8c990297c3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:56,612 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=24d5395949aa6d77173fe1b70279538c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:56,612 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7d031be796b29b3bfa385fe2708c48cb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:56,612 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=66fde4b500d3cedb29703e54ee16e1fe, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689675271845; forceNewPlan=false, retain=false 2023-07-18 10:14:56,613 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8f7d8df4b01e7ce2207c8e41eb497ce5, ASSIGN 2023-07-18 10:14:56,613 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8f7d8df4b01e7ce2207c8e41eb497ce5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40931,1689675272348; forceNewPlan=false, retain=false 2023-07-18 10:14:56,763 INFO [jenkins-hbase4:42907] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 10:14:56,767 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=7d031be796b29b3bfa385fe2708c48cb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:56,767 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=8c43e3e22adbffb53a8cdc8c990297c3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:56,767 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296767"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675296767"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675296767"}]},"ts":"1689675296767"} 2023-07-18 10:14:56,767 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=8f7d8df4b01e7ce2207c8e41eb497ce5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:56,767 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=24d5395949aa6d77173fe1b70279538c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:56,768 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675296767"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675296767"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675296767"}]},"ts":"1689675296767"} 2023-07-18 10:14:56,768 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296767"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675296767"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675296767"}]},"ts":"1689675296767"} 2023-07-18 10:14:56,767 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=66fde4b500d3cedb29703e54ee16e1fe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:56,767 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675296767"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675296767"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675296767"}]},"ts":"1689675296767"} 2023-07-18 10:14:56,768 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296767"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675296767"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675296767"}]},"ts":"1689675296767"} 2023-07-18 10:14:56,769 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=134, state=RUNNABLE; OpenRegionProcedure 7d031be796b29b3bfa385fe2708c48cb, 
server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:56,771 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=137, state=RUNNABLE; OpenRegionProcedure 8f7d8df4b01e7ce2207c8e41eb497ce5, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:56,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=135, state=RUNNABLE; OpenRegionProcedure 24d5395949aa6d77173fe1b70279538c, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:56,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 10:14:56,775 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=133, state=RUNNABLE; OpenRegionProcedure 8c43e3e22adbffb53a8cdc8c990297c3, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:56,775 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; OpenRegionProcedure 66fde4b500d3cedb29703e54ee16e1fe, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:56,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:56,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 66fde4b500d3cedb29703e54ee16e1fe, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 10:14:56,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 
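At this point each assignment procedure (pids 133-137) has spawned an OpenRegionProcedure (pids 138-142), and the recurring "Checking to see if procedure is done pid=132" lines are the client-side HBaseAdmin polling until the create-table procedure finishes. An explicit wait on the same condition could be sketched against the public Admin API as below; the test itself simply relies on createTable blocking.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WaitForTableAvailable {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // isTableAvailable returns true once every region of the table is assigned,
      // i.e. once the TransitRegionStateProcedures (pids 133-137) have finished.
      while (!admin.isTableAvailable(table)) {
        Thread.sleep(100);
      }
    }
  }
}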
2023-07-18 10:14:56,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8c43e3e22adbffb53a8cdc8c990297c3, NAME => 'Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 10:14:56,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,934 INFO [StoreOpener-66fde4b500d3cedb29703e54ee16e1fe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,934 INFO [StoreOpener-8c43e3e22adbffb53a8cdc8c990297c3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,936 DEBUG [StoreOpener-66fde4b500d3cedb29703e54ee16e1fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe/f 2023-07-18 10:14:56,936 DEBUG [StoreOpener-8c43e3e22adbffb53a8cdc8c990297c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3/f 2023-07-18 10:14:56,936 DEBUG [StoreOpener-66fde4b500d3cedb29703e54ee16e1fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe/f 2023-07-18 10:14:56,936 DEBUG [StoreOpener-8c43e3e22adbffb53a8cdc8c990297c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3/f 2023-07-18 10:14:56,937 INFO [StoreOpener-66fde4b500d3cedb29703e54ee16e1fe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 66fde4b500d3cedb29703e54ee16e1fe columnFamilyName f 2023-07-18 10:14:56,937 INFO [StoreOpener-8c43e3e22adbffb53a8cdc8c990297c3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8c43e3e22adbffb53a8cdc8c990297c3 columnFamilyName f 2023-07-18 10:14:56,937 INFO [StoreOpener-66fde4b500d3cedb29703e54ee16e1fe-1] regionserver.HStore(310): Store=66fde4b500d3cedb29703e54ee16e1fe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:56,938 INFO [StoreOpener-8c43e3e22adbffb53a8cdc8c990297c3-1] regionserver.HStore(310): Store=8c43e3e22adbffb53a8cdc8c990297c3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:56,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:56,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:56,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:56,946 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 66fde4b500d3cedb29703e54ee16e1fe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10189348000, jitterRate=-0.05104301869869232}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:56,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 66fde4b500d3cedb29703e54ee16e1fe: 2023-07-18 10:14:56,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe., pid=142, masterSystemTime=1689675296922 2023-07-18 10:14:56,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:56,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:56,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:56,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 24d5395949aa6d77173fe1b70279538c, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 10:14:56,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,950 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=66fde4b500d3cedb29703e54ee16e1fe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:56,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,950 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296949"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675296949"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675296949"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675296949"}]},"ts":"1689675296949"} 2023-07-18 10:14:56,951 INFO [StoreOpener-24d5395949aa6d77173fe1b70279538c-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:56,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8c43e3e22adbffb53a8cdc8c990297c3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11855095200, jitterRate=0.10409177839756012}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:56,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8c43e3e22adbffb53a8cdc8c990297c3: 2023-07-18 10:14:56,957 DEBUG [StoreOpener-24d5395949aa6d77173fe1b70279538c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c/f 2023-07-18 10:14:56,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3., pid=141, masterSystemTime=1689675296927 2023-07-18 10:14:56,957 DEBUG [StoreOpener-24d5395949aa6d77173fe1b70279538c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c/f 2023-07-18 10:14:56,958 INFO [StoreOpener-24d5395949aa6d77173fe1b70279538c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 24d5395949aa6d77173fe1b70279538c columnFamilyName f 2023-07-18 10:14:56,960 INFO [StoreOpener-24d5395949aa6d77173fe1b70279538c-1] regionserver.HStore(310): Store=24d5395949aa6d77173fe1b70279538c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:56,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-18 10:14:56,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; OpenRegionProcedure 66fde4b500d3cedb29703e54ee16e1fe, server=jenkins-hbase4.apache.org,42163,1689675271845 in 181 msec 2023-07-18 10:14:56,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,961 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=8c43e3e22adbffb53a8cdc8c990297c3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:56,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 2023-07-18 10:14:56,961 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675296961"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675296961"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675296961"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675296961"}]},"ts":"1689675296961"} 2023-07-18 10:14:56,962 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=66fde4b500d3cedb29703e54ee16e1fe, ASSIGN in 352 msec 2023-07-18 10:14:56,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 2023-07-18 10:14:56,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 
2023-07-18 10:14:56,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f7d8df4b01e7ce2207c8e41eb497ce5, NAME => 'Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 10:14:56,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=133 2023-07-18 10:14:56,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=133, state=SUCCESS; OpenRegionProcedure 8c43e3e22adbffb53a8cdc8c990297c3, server=jenkins-hbase4.apache.org,40931,1689675272348 in 188 msec 2023-07-18 10:14:56,965 INFO [StoreOpener-8f7d8df4b01e7ce2207c8e41eb497ce5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:56,968 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8c43e3e22adbffb53a8cdc8c990297c3, ASSIGN in 357 msec 2023-07-18 10:14:56,968 DEBUG [StoreOpener-8f7d8df4b01e7ce2207c8e41eb497ce5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5/f 2023-07-18 10:14:56,968 DEBUG [StoreOpener-8f7d8df4b01e7ce2207c8e41eb497ce5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5/f 2023-07-18 10:14:56,969 INFO [StoreOpener-8f7d8df4b01e7ce2207c8e41eb497ce5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window 
factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f7d8df4b01e7ce2207c8e41eb497ce5 columnFamilyName f 2023-07-18 10:14:56,969 INFO [StoreOpener-8f7d8df4b01e7ce2207c8e41eb497ce5-1] regionserver.HStore(310): Store=8f7d8df4b01e7ce2207c8e41eb497ce5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:56,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:56,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,972 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 24d5395949aa6d77173fe1b70279538c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9818406240, jitterRate=-0.08558966219425201}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:56,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 24d5395949aa6d77173fe1b70279538c: 2023-07-18 10:14:56,972 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c., pid=140, masterSystemTime=1689675296922 2023-07-18 10:14:56,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:56,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:56,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 
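[Editorial reference] The CompactionConfiguration entries repeated above for each store are a dump of the effective compaction settings (3 to 10 files per compaction, ratio 1.2, 128 MB minimum compact size, a 7-day major-compaction period with 0.5 jitter). Those are the stock defaults in this run; the sketch below is purely illustrative of which hbase.hstore.* properties drive the printed numbers, assuming a standard HBaseConfiguration. The class and method names are invented for the sketch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      // Returns a Configuration carrying the compaction knobs whose effective values
      // are echoed by the CompactionConfiguration log entries above.
      static Configuration compactionConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // ratio
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize = 128 MB
        return conf;
      }
    }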
2023-07-18 10:14:56,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7d031be796b29b3bfa385fe2708c48cb, NAME => 'Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 10:14:56,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:56,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:14:56,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,975 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=24d5395949aa6d77173fe1b70279538c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:56,975 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296975"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675296975"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675296975"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675296975"}]},"ts":"1689675296975"} 2023-07-18 10:14:56,978 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-18 10:14:56,978 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; OpenRegionProcedure 24d5395949aa6d77173fe1b70279538c, server=jenkins-hbase4.apache.org,42163,1689675271845 in 204 msec 2023-07-18 10:14:56,979 INFO [StoreOpener-7d031be796b29b3bfa385fe2708c48cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,979 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=24d5395949aa6d77173fe1b70279538c, ASSIGN in 370 msec 2023-07-18 10:14:56,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:56,980 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
8f7d8df4b01e7ce2207c8e41eb497ce5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11303763040, jitterRate=0.052744969725608826}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:56,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f7d8df4b01e7ce2207c8e41eb497ce5: 2023-07-18 10:14:56,980 DEBUG [StoreOpener-7d031be796b29b3bfa385fe2708c48cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb/f 2023-07-18 10:14:56,980 DEBUG [StoreOpener-7d031be796b29b3bfa385fe2708c48cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb/f 2023-07-18 10:14:56,981 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5., pid=139, masterSystemTime=1689675296927 2023-07-18 10:14:56,981 INFO [StoreOpener-7d031be796b29b3bfa385fe2708c48cb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7d031be796b29b3bfa385fe2708c48cb columnFamilyName f 2023-07-18 10:14:56,981 INFO [StoreOpener-7d031be796b29b3bfa385fe2708c48cb-1] regionserver.HStore(310): Store=7d031be796b29b3bfa385fe2708c48cb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:14:56,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 2023-07-18 10:14:56,982 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 
2023-07-18 10:14:56,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,982 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=8f7d8df4b01e7ce2207c8e41eb497ce5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:56,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,982 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675296982"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675296982"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675296982"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675296982"}]},"ts":"1689675296982"} 2023-07-18 10:14:56,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=137 2023-07-18 10:14:56,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; OpenRegionProcedure 8f7d8df4b01e7ce2207c8e41eb497ce5, server=jenkins-hbase4.apache.org,40931,1689675272348 in 213 msec 2023-07-18 10:14:56,987 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8f7d8df4b01e7ce2207c8e41eb497ce5, ASSIGN in 378 msec 2023-07-18 10:14:56,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:56,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:14:56,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7d031be796b29b3bfa385fe2708c48cb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10984159840, jitterRate=0.022979602217674255}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:14:56,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7d031be796b29b3bfa385fe2708c48cb: 2023-07-18 10:14:56,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb., pid=138, masterSystemTime=1689675296922 2023-07-18 10:14:56,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 
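[Editorial reference] The OpenRegionProcedure and post-open deploy traffic above is what the master and region servers emit while a pre-split table with a single family f is created. A minimal client-side sketch that would drive this kind of activity, assuming the stock HBase 2.x Admin API and an already-started HBaseTestingUtility; the split keys and class/method names below are illustrative, not the exact ones this test uses.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      // Creates a pre-split table with one family "f" and blocks until every region is assigned.
      static void createAndWait(HBaseTestingUtility util, TableName tableName) throws IOException {
        // Illustrative split keys giving five regions; the test derives its own keys.
        byte[][] splits = { Bytes.toBytes("aaaaa"), Bytes.toBytes("i"), Bytes.toBytes("r"), Bytes.toBytes("zzzzz") };
        try (Admin admin = util.getConnection().getAdmin()) {
          admin.createTable(
              TableDescriptorBuilder.newBuilder(tableName)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f")) // family "f", as in the HStore entries above
                  .build(),
              splits); // drives the CreateTableProcedure / OpenRegionProcedure activity recorded here
        }
        util.waitUntilAllRegionsAssigned(tableName); // mirrors the "Waiting until all regions ... get assigned" entries below
      }
    }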
2023-07-18 10:14:56,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 2023-07-18 10:14:56,993 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=7d031be796b29b3bfa385fe2708c48cb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:56,993 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675296993"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675296993"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675296993"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675296993"}]},"ts":"1689675296993"} 2023-07-18 10:14:56,995 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=134 2023-07-18 10:14:56,996 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; OpenRegionProcedure 7d031be796b29b3bfa385fe2708c48cb, server=jenkins-hbase4.apache.org,42163,1689675271845 in 225 msec 2023-07-18 10:14:56,997 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-18 10:14:56,997 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7d031be796b29b3bfa385fe2708c48cb, ASSIGN in 388 msec 2023-07-18 10:14:56,997 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:14:56,998 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675296998"}]},"ts":"1689675296998"} 2023-07-18 10:14:56,999 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-18 10:14:57,001 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:14:57,002 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 544 msec 2023-07-18 10:14:57,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 10:14:57,076 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-18 10:14:57,076 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. 
Timeout = 60000ms 2023-07-18 10:14:57,076 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:57,080 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-18 10:14:57,080 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:57,080 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-18 10:14:57,081 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:57,087 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 10:14:57,087 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:57,088 INFO [Listener at localhost/45689] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 10:14:57,088 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 10:14:57,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-18 10:14:57,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 10:14:57,092 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675297092"}]},"ts":"1689675297092"} 2023-07-18 10:14:57,093 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-18 10:14:57,095 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-18 10:14:57,096 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8c43e3e22adbffb53a8cdc8c990297c3, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7d031be796b29b3bfa385fe2708c48cb, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=24d5395949aa6d77173fe1b70279538c, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=66fde4b500d3cedb29703e54ee16e1fe, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8f7d8df4b01e7ce2207c8e41eb497ce5, UNASSIGN}] 2023-07-18 10:14:57,099 INFO [PEWorker-2] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=66fde4b500d3cedb29703e54ee16e1fe, UNASSIGN 2023-07-18 10:14:57,100 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7d031be796b29b3bfa385fe2708c48cb, UNASSIGN 2023-07-18 10:14:57,100 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8f7d8df4b01e7ce2207c8e41eb497ce5, UNASSIGN 2023-07-18 10:14:57,100 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=24d5395949aa6d77173fe1b70279538c, UNASSIGN 2023-07-18 10:14:57,100 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8c43e3e22adbffb53a8cdc8c990297c3, UNASSIGN 2023-07-18 10:14:57,100 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=66fde4b500d3cedb29703e54ee16e1fe, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:57,101 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675297100"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675297100"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675297100"}]},"ts":"1689675297100"} 2023-07-18 10:14:57,101 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=8f7d8df4b01e7ce2207c8e41eb497ce5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:57,101 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=8c43e3e22adbffb53a8cdc8c990297c3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:57,101 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=7d031be796b29b3bfa385fe2708c48cb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:57,101 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675297101"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675297101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675297101"}]},"ts":"1689675297101"} 2023-07-18 10:14:57,101 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675297101"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675297101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675297101"}]},"ts":"1689675297101"} 2023-07-18 10:14:57,101 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675297101"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675297101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675297101"}]},"ts":"1689675297101"} 2023-07-18 10:14:57,101 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=24d5395949aa6d77173fe1b70279538c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:57,102 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675297101"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675297101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675297101"}]},"ts":"1689675297101"} 2023-07-18 10:14:57,102 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=147, state=RUNNABLE; CloseRegionProcedure 66fde4b500d3cedb29703e54ee16e1fe, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:57,103 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=144, state=RUNNABLE; CloseRegionProcedure 8c43e3e22adbffb53a8cdc8c990297c3, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:57,103 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=145, state=RUNNABLE; CloseRegionProcedure 7d031be796b29b3bfa385fe2708c48cb, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:57,104 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=148, state=RUNNABLE; CloseRegionProcedure 8f7d8df4b01e7ce2207c8e41eb497ce5, server=jenkins-hbase4.apache.org,40931,1689675272348}] 2023-07-18 10:14:57,105 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=146, state=RUNNABLE; CloseRegionProcedure 24d5395949aa6d77173fe1b70279538c, server=jenkins-hbase4.apache.org,42163,1689675271845}] 2023-07-18 10:14:57,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 10:14:57,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:57,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7d031be796b29b3bfa385fe2708c48cb, disabling compactions & flushes 2023-07-18 10:14:57,255 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 
2023-07-18 10:14:57,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 2023-07-18 10:14:57,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. after waiting 0 ms 2023-07-18 10:14:57,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 2023-07-18 10:14:57,255 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:57,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8c43e3e22adbffb53a8cdc8c990297c3, disabling compactions & flushes 2023-07-18 10:14:57,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 2023-07-18 10:14:57,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 2023-07-18 10:14:57,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. after waiting 0 ms 2023-07-18 10:14:57,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 2023-07-18 10:14:57,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:57,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:57,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb. 2023-07-18 10:14:57,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7d031be796b29b3bfa385fe2708c48cb: 2023-07-18 10:14:57,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3. 
2023-07-18 10:14:57,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8c43e3e22adbffb53a8cdc8c990297c3: 2023-07-18 10:14:57,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:57,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:57,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 24d5395949aa6d77173fe1b70279538c, disabling compactions & flushes 2023-07-18 10:14:57,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:57,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:57,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. after waiting 0 ms 2023-07-18 10:14:57,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:57,263 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=7d031be796b29b3bfa385fe2708c48cb, regionState=CLOSED 2023-07-18 10:14:57,263 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675297262"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675297262"}]},"ts":"1689675297262"} 2023-07-18 10:14:57,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:57,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:57,263 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=8c43e3e22adbffb53a8cdc8c990297c3, regionState=CLOSED 2023-07-18 10:14:57,264 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675297263"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675297263"}]},"ts":"1689675297263"} 2023-07-18 10:14:57,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f7d8df4b01e7ce2207c8e41eb497ce5, disabling compactions & flushes 2023-07-18 10:14:57,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 
2023-07-18 10:14:57,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 2023-07-18 10:14:57,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. after waiting 0 ms 2023-07-18 10:14:57,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 2023-07-18 10:14:57,267 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=145 2023-07-18 10:14:57,267 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=145, state=SUCCESS; CloseRegionProcedure 7d031be796b29b3bfa385fe2708c48cb, server=jenkins-hbase4.apache.org,42163,1689675271845 in 162 msec 2023-07-18 10:14:57,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:57,268 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=144 2023-07-18 10:14:57,268 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=144, state=SUCCESS; CloseRegionProcedure 8c43e3e22adbffb53a8cdc8c990297c3, server=jenkins-hbase4.apache.org,40931,1689675272348 in 163 msec 2023-07-18 10:14:57,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c. 2023-07-18 10:14:57,268 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7d031be796b29b3bfa385fe2708c48cb, UNASSIGN in 171 msec 2023-07-18 10:14:57,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 24d5395949aa6d77173fe1b70279538c: 2023-07-18 10:14:57,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:57,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5. 
2023-07-18 10:14:57,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f7d8df4b01e7ce2207c8e41eb497ce5: 2023-07-18 10:14:57,271 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8c43e3e22adbffb53a8cdc8c990297c3, UNASSIGN in 172 msec 2023-07-18 10:14:57,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:57,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:57,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 66fde4b500d3cedb29703e54ee16e1fe, disabling compactions & flushes 2023-07-18 10:14:57,272 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:57,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:57,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. after waiting 0 ms 2023-07-18 10:14:57,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 2023-07-18 10:14:57,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:14:57,275 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=24d5395949aa6d77173fe1b70279538c, regionState=CLOSED 2023-07-18 10:14:57,275 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675297275"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675297275"}]},"ts":"1689675297275"} 2023-07-18 10:14:57,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:57,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe. 
2023-07-18 10:14:57,276 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 66fde4b500d3cedb29703e54ee16e1fe: 2023-07-18 10:14:57,276 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=8f7d8df4b01e7ce2207c8e41eb497ce5, regionState=CLOSED 2023-07-18 10:14:57,276 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689675297276"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675297276"}]},"ts":"1689675297276"} 2023-07-18 10:14:57,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:57,278 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=66fde4b500d3cedb29703e54ee16e1fe, regionState=CLOSED 2023-07-18 10:14:57,278 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689675297278"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675297278"}]},"ts":"1689675297278"} 2023-07-18 10:14:57,285 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=146 2023-07-18 10:14:57,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=148 2023-07-18 10:14:57,285 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=146, state=SUCCESS; CloseRegionProcedure 24d5395949aa6d77173fe1b70279538c, server=jenkins-hbase4.apache.org,42163,1689675271845 in 173 msec 2023-07-18 10:14:57,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=148, state=SUCCESS; CloseRegionProcedure 8f7d8df4b01e7ce2207c8e41eb497ce5, server=jenkins-hbase4.apache.org,40931,1689675272348 in 173 msec 2023-07-18 10:14:57,286 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8f7d8df4b01e7ce2207c8e41eb497ce5, UNASSIGN in 189 msec 2023-07-18 10:14:57,286 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=24d5395949aa6d77173fe1b70279538c, UNASSIGN in 189 msec 2023-07-18 10:14:57,287 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=147 2023-07-18 10:14:57,287 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=147, state=SUCCESS; CloseRegionProcedure 66fde4b500d3cedb29703e54ee16e1fe, server=jenkins-hbase4.apache.org,42163,1689675271845 in 183 msec 2023-07-18 10:14:57,289 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=143 2023-07-18 10:14:57,289 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=66fde4b500d3cedb29703e54ee16e1fe, UNASSIGN in 191 msec 2023-07-18 10:14:57,290 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675297290"}]},"ts":"1689675297290"} 2023-07-18 10:14:57,291 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-18 10:14:57,296 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-18 10:14:57,300 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 211 msec 2023-07-18 10:14:57,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 10:14:57,394 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-18 10:14:57,394 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_946081103 2023-07-18 10:14:57,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_946081103 2023-07-18 10:14:57,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_946081103 2023-07-18 10:14:57,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:57,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:57,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:57,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-18 10:14:57,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_946081103, current retry=0 2023-07-18 10:14:57,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_946081103. 
2023-07-18 10:14:57,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:57,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:57,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:57,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 10:14:57,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:14:57,407 INFO [Listener at localhost/45689] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 10:14:57,408 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 10:14:57,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove
    at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163)
    at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78)
    at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429)
    at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132)
    at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413)
    at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-18 10:14:57,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:40186 deadline: 1689675357408, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove
2023-07-18 10:14:57,409 DEBUG [Listener at localhost/45689] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it.
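[Editorial reference] The TableNotEnabledException above is the expected failure when a table is disabled a second time; the test utility then falls straight to deleting the already-disabled table, which is what the DeleteTableProcedure and HFileArchiver entries below record. A minimal sketch of that disable-if-needed-then-delete pattern, assuming the standard HBase 2.x Admin API; the class and method names are invented for the sketch.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;

    public class DisableAndDeleteSketch {
      // Disables the table if it is still enabled, then deletes it.
      static void disableAndDelete(Admin admin, TableName tableName) throws IOException {
        try {
          admin.disableTable(tableName); // fails with TableNotEnabledException if already disabled
        } catch (TableNotEnabledException e) {
          // Already disabled (as in the log above); proceed straight to the delete.
        }
        admin.deleteTable(tableName); // triggers the DeleteTableProcedure / HFileArchiver activity below
      }
    }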
2023-07-18 10:14:57,409 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-18 10:14:57,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 10:14:57,412 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 10:14:57,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_946081103' 2023-07-18 10:14:57,413 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 10:14:57,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_946081103 2023-07-18 10:14:57,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:57,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:57,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:57,419 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:57,419 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:57,419 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:57,419 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:57,419 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:57,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-18 10:14:57,422 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c/f, FileablePath, 
hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c/recovered.edits] 2023-07-18 10:14:57,422 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb/recovered.edits] 2023-07-18 10:14:57,422 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe/recovered.edits] 2023-07-18 10:14:57,423 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5/recovered.edits] 2023-07-18 10:14:57,425 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3/f, FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3/recovered.edits] 2023-07-18 10:14:57,430 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb/recovered.edits/4.seqid 2023-07-18 10:14:57,431 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c/recovered.edits/4.seqid 2023-07-18 10:14:57,431 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/7d031be796b29b3bfa385fe2708c48cb 2023-07-18 10:14:57,432 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/24d5395949aa6d77173fe1b70279538c 2023-07-18 10:14:57,432 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from 
FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5/recovered.edits/4.seqid 2023-07-18 10:14:57,432 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe/recovered.edits/4.seqid 2023-07-18 10:14:57,432 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8f7d8df4b01e7ce2207c8e41eb497ce5 2023-07-18 10:14:57,433 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3/recovered.edits/4.seqid to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/archive/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3/recovered.edits/4.seqid 2023-07-18 10:14:57,433 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/66fde4b500d3cedb29703e54ee16e1fe 2023-07-18 10:14:57,433 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/.tmp/data/default/Group_testDisabledTableMove/8c43e3e22adbffb53a8cdc8c990297c3 2023-07-18 10:14:57,434 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 10:14:57,436 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 10:14:57,438 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-18 10:14:57,440 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-18 10:14:57,441 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 10:14:57,441 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-18 10:14:57,441 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675297441"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:57,441 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675297441"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:57,441 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675297441"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:57,441 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675297441"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:57,441 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675297441"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:57,443 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 10:14:57,443 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8c43e3e22adbffb53a8cdc8c990297c3, NAME => 'Group_testDisabledTableMove,,1689675296456.8c43e3e22adbffb53a8cdc8c990297c3.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 7d031be796b29b3bfa385fe2708c48cb, NAME => 'Group_testDisabledTableMove,aaaaa,1689675296456.7d031be796b29b3bfa385fe2708c48cb.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 24d5395949aa6d77173fe1b70279538c, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689675296456.24d5395949aa6d77173fe1b70279538c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 66fde4b500d3cedb29703e54ee16e1fe, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689675296456.66fde4b500d3cedb29703e54ee16e1fe.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8f7d8df4b01e7ce2207c8e41eb497ce5, NAME => 'Group_testDisabledTableMove,zzzzz,1689675296456.8f7d8df4b01e7ce2207c8e41eb497ce5.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 10:14:57,443 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-18 10:14:57,443 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689675297443"}]},"ts":"9223372036854775807"} 2023-07-18 10:14:57,444 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-18 10:14:57,447 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 10:14:57,448 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 38 msec 2023-07-18 10:14:57,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-18 10:14:57,522 INFO [Listener at localhost/45689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-18 10:14:57,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:57,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:57,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:57,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 10:14:57,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:57,528 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:35633] to rsgroup default 2023-07-18 10:14:57,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_946081103 2023-07-18 10:14:57,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:57,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:57,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:14:57,533 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_946081103, current retry=0 2023-07-18 10:14:57,533 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35633,1689675275991, jenkins-hbase4.apache.org,40033,1689675272048] are moved back to Group_testDisabledTableMove_946081103 2023-07-18 10:14:57,533 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_946081103 => default 2023-07-18 10:14:57,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:57,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_946081103 2023-07-18 10:14:57,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:57,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:57,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 10:14:57,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:57,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:57,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 10:14:57,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:57,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:57,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:57,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:57,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:57,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:57,546 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:57,548 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:57,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:57,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:57,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:57,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:57,554 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:57,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:57,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:57,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:57,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:57,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676497557, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:57,558 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:14:57,559 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:57,560 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:57,560 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:57,560 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:57,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:57,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:57,578 INFO [Listener at localhost/45689] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=513 (was 512) Potentially hanging thread: hconnection-0x297c531f-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5f7045aa-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1073179997_17 at /127.0.0.1:48732 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-473618993_17 at /127.0.0.1:52488 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=801 (was 775) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 426), ProcessCount=173 (was 173), AvailableMemoryMB=3255 (was 3312) 2023-07-18 10:14:57,578 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-18 10:14:57,594 INFO [Listener at localhost/45689] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=513, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=173, AvailableMemoryMB=3254 2023-07-18 10:14:57,594 WARN [Listener at localhost/45689] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-18 10:14:57,594 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-18 10:14:57,598 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:57,598 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:57,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:14:57,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 10:14:57,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:14:57,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:14:57,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:14:57,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:14:57,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:14:57,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:14:57,605 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:14:57,608 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:14:57,608 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:14:57,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 
10:14:57,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:14:57,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:14:57,617 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:14:57,619 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:57,619 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:57,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42907] to rsgroup master 2023-07-18 10:14:57,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:14:57,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40186 deadline: 1689676497621, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 2023-07-18 10:14:57,622 WARN [Listener at localhost/45689] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42907 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:14:57,623 INFO [Listener at localhost/45689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:14:57,624 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:14:57,624 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:14:57,624 INFO [Listener at localhost/45689] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35633, jenkins-hbase4.apache.org:40033, jenkins-hbase4.apache.org:40931, jenkins-hbase4.apache.org:42163], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:14:57,625 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:14:57,625 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42907] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:14:57,625 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 10:14:57,626 INFO [Listener at localhost/45689] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 10:14:57,626 DEBUG [Listener at localhost/45689] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0064b392 to 127.0.0.1:53154 2023-07-18 10:14:57,626 DEBUG [Listener at localhost/45689] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,627 DEBUG [Listener at localhost/45689] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 10:14:57,627 DEBUG [Listener at localhost/45689] util.JVMClusterUtil(257): Found active master hash=397760860, stopped=false 2023-07-18 10:14:57,627 DEBUG [Listener at localhost/45689] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 10:14:57,627 DEBUG [Listener at localhost/45689] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 10:14:57,627 INFO [Listener at localhost/45689] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:57,629 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:57,629 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:57,629 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:57,629 
DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:57,629 INFO [Listener at localhost/45689] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 10:14:57,629 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:57,629 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:57,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:57,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:57,630 DEBUG [Listener at localhost/45689] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2ffb11a2 to 127.0.0.1:53154 2023-07-18 10:14:57,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:57,630 DEBUG [Listener at localhost/45689] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,630 INFO [Listener at localhost/45689] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42163,1689675271845' ***** 2023-07-18 10:14:57,630 INFO [Listener at localhost/45689] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:14:57,631 INFO [Listener at localhost/45689] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40033,1689675272048' ***** 2023-07-18 10:14:57,631 INFO [Listener at localhost/45689] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:14:57,631 INFO [Listener at localhost/45689] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40931,1689675272348' ***** 2023-07-18 10:14:57,631 INFO [Listener at localhost/45689] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:14:57,631 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:14:57,630 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:14:57,631 INFO [Listener at localhost/45689] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35633,1689675275991' ***** 2023-07-18 10:14:57,631 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:14:57,631 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:14:57,633 INFO [Listener at localhost/45689] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:14:57,634 INFO 
[RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:14:57,639 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:14:57,655 INFO [RS:3;jenkins-hbase4:35633] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3576228c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:14:57,656 INFO [RS:0;jenkins-hbase4:42163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4741679e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:14:57,657 INFO [RS:2;jenkins-hbase4:40931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6bae5329{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:14:57,660 INFO [RS:1;jenkins-hbase4:40033] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@296c8231{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:14:57,661 INFO [RS:1;jenkins-hbase4:40033] server.AbstractConnector(383): Stopped ServerConnector@5b9ad7c5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:14:57,661 INFO [RS:2;jenkins-hbase4:40931] server.AbstractConnector(383): Stopped ServerConnector@341f201f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:14:57,661 INFO [RS:1;jenkins-hbase4:40033] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:14:57,661 INFO [RS:2;jenkins-hbase4:40931] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:14:57,661 INFO [RS:3;jenkins-hbase4:35633] server.AbstractConnector(383): Stopped ServerConnector@191c4c74{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:14:57,661 INFO [RS:3;jenkins-hbase4:35633] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:14:57,662 INFO [RS:0;jenkins-hbase4:42163] server.AbstractConnector(383): Stopped ServerConnector@1eb5cca9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:14:57,662 INFO [RS:0;jenkins-hbase4:42163] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:14:57,667 INFO [RS:3;jenkins-hbase4:35633] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1b8a7a95{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:14:57,667 INFO [RS:2;jenkins-hbase4:40931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4025456f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:14:57,668 INFO [RS:3;jenkins-hbase4:35633] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@1621780c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,STOPPED} 2023-07-18 10:14:57,667 INFO [RS:0;jenkins-hbase4:42163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@59ec68ba{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:14:57,667 INFO [RS:1;jenkins-hbase4:40033] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2a84da01{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:14:57,669 INFO [RS:2;jenkins-hbase4:40931] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3c064514{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,STOPPED} 2023-07-18 10:14:57,671 INFO [RS:0;jenkins-hbase4:42163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f5b6c1a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,STOPPED} 2023-07-18 10:14:57,672 INFO [RS:1;jenkins-hbase4:40033] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7668a9a6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,STOPPED} 2023-07-18 10:14:57,674 INFO [RS:0;jenkins-hbase4:42163] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:14:57,674 INFO [RS:2;jenkins-hbase4:40931] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:14:57,675 INFO [RS:1;jenkins-hbase4:40033] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:14:57,675 INFO [RS:0;jenkins-hbase4:42163] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:14:57,675 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:14:57,675 INFO [RS:0;jenkins-hbase4:42163] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 10:14:57,675 INFO [RS:1;jenkins-hbase4:40033] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:14:57,675 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:14:57,675 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(3305): Received CLOSE for 6fb842bd011abbe63e3755e261be5bdf 2023-07-18 10:14:57,675 INFO [RS:2;jenkins-hbase4:40931] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:14:57,675 INFO [RS:2;jenkins-hbase4:40931] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 10:14:57,675 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(3305): Received CLOSE for c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:57,676 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:57,676 DEBUG [RS:2;jenkins-hbase4:40931] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6dc3e660 to 127.0.0.1:53154 2023-07-18 10:14:57,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6fb842bd011abbe63e3755e261be5bdf, disabling compactions & flushes 2023-07-18 10:14:57,676 INFO [RS:3;jenkins-hbase4:35633] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:14:57,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c8e2eee4a7112b8e2faf0ec9b8864302, disabling compactions & flushes 2023-07-18 10:14:57,677 INFO [RS:3;jenkins-hbase4:35633] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:14:57,675 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:14:57,677 INFO [RS:3;jenkins-hbase4:35633] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 10:14:57,677 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:57,677 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:14:57,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:57,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:57,676 DEBUG [RS:2;jenkins-hbase4:40931] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,676 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(3305): Received CLOSE for 2929c6f81410eb8cdf881f05484b0086 2023-07-18 10:14:57,675 INFO [RS:1;jenkins-hbase4:40033] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 10:14:57,677 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(3305): Received CLOSE for c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:57,677 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 10:14:57,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:57,677 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1478): Online Regions={c8e2eee4a7112b8e2faf0ec9b8864302=testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302.} 2023-07-18 10:14:57,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. after waiting 0 ms 2023-07-18 10:14:57,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 
2023-07-18 10:14:57,677 INFO [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:57,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. after waiting 0 ms 2023-07-18 10:14:57,677 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:57,677 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:57,678 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3afe5354 to 127.0.0.1:53154 2023-07-18 10:14:57,678 DEBUG [RS:3;jenkins-hbase4:35633] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39f9e47b to 127.0.0.1:53154 2023-07-18 10:14:57,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:57,678 DEBUG [RS:0;jenkins-hbase4:42163] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,678 DEBUG [RS:3;jenkins-hbase4:35633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,678 INFO [RS:0;jenkins-hbase4:42163] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:14:57,678 INFO [RS:0;jenkins-hbase4:42163] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:14:57,678 INFO [RS:0;jenkins-hbase4:42163] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:14:57,678 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 10:14:57,678 DEBUG [RS:1;jenkins-hbase4:40033] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4201edaf to 127.0.0.1:53154 2023-07-18 10:14:57,678 DEBUG [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1504): Waiting on c8e2eee4a7112b8e2faf0ec9b8864302 2023-07-18 10:14:57,678 INFO [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35633,1689675275991; all regions closed. 2023-07-18 10:14:57,679 DEBUG [RS:1;jenkins-hbase4:40033] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,679 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40033,1689675272048; all regions closed. 
2023-07-18 10:14:57,682 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-18 10:14:57,682 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 6fb842bd011abbe63e3755e261be5bdf=hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf., 2929c6f81410eb8cdf881f05484b0086=unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086., c279e5fb45e4dd6ee6ca1bf14c1ea18e=hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e.} 2023-07-18 10:14:57,682 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1504): Waiting on 1588230740, 2929c6f81410eb8cdf881f05484b0086, 6fb842bd011abbe63e3755e261be5bdf, c279e5fb45e4dd6ee6ca1bf14c1ea18e 2023-07-18 10:14:57,684 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 10:14:57,684 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 10:14:57,684 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 10:14:57,684 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 10:14:57,684 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 10:14:57,684 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=38.63 KB heapSize=63 KB 2023-07-18 10:14:57,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/testRename/c8e2eee4a7112b8e2faf0ec9b8864302/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 10:14:57,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 2023-07-18 10:14:57,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c8e2eee4a7112b8e2faf0ec9b8864302: 2023-07-18 10:14:57,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689675290785.c8e2eee4a7112b8e2faf0ec9b8864302. 
2023-07-18 10:14:57,718 DEBUG [RS:1;jenkins-hbase4:40033] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs 2023-07-18 10:14:57,719 INFO [RS:1;jenkins-hbase4:40033] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40033%2C1689675272048:(num 1689675274556) 2023-07-18 10:14:57,719 DEBUG [RS:1;jenkins-hbase4:40033] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,719 INFO [RS:1;jenkins-hbase4:40033] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:14:57,720 DEBUG [RS:3;jenkins-hbase4:35633] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs 2023-07-18 10:14:57,720 INFO [RS:3;jenkins-hbase4:35633] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35633%2C1689675275991:(num 1689675276397) 2023-07-18 10:14:57,720 DEBUG [RS:3;jenkins-hbase4:35633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,720 INFO [RS:3;jenkins-hbase4:35633] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:14:57,722 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:14:57,723 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:14:57,724 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:14:57,724 INFO [RS:3;jenkins-hbase4:35633] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 10:14:57,725 INFO [RS:3;jenkins-hbase4:35633] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:14:57,725 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:14:57,725 INFO [RS:3;jenkins-hbase4:35633] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:14:57,725 INFO [RS:3;jenkins-hbase4:35633] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:14:57,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/namespace/6fb842bd011abbe63e3755e261be5bdf/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-18 10:14:57,728 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:14:57,732 INFO [RS:1;jenkins-hbase4:40033] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 10:14:57,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:57,736 INFO [RS:3;jenkins-hbase4:35633] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35633 2023-07-18 10:14:57,736 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 10:14:57,736 INFO [RS:1;jenkins-hbase4:40033] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:14:57,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6fb842bd011abbe63e3755e261be5bdf: 2023-07-18 10:14:57,736 INFO [RS:1;jenkins-hbase4:40033] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:14:57,736 INFO [RS:1;jenkins-hbase4:40033] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:14:57,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689675274870.6fb842bd011abbe63e3755e261be5bdf. 2023-07-18 10:14:57,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2929c6f81410eb8cdf881f05484b0086, disabling compactions & flushes 2023-07-18 10:14:57,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:57,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:57,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. after waiting 0 ms 2023-07-18 10:14:57,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:57,747 INFO [RS:1;jenkins-hbase4:40033] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40033 2023-07-18 10:14:57,756 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:57,756 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:57,756 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:57,756 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:57,757 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:57,757 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:57,757 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:57,757 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40033,1689675272048 2023-07-18 10:14:57,757 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:57,757 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=35.70 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/info/f70ae5edf6894cc490b6a5071ac848fb 2023-07-18 10:14:57,759 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40033,1689675272048] 2023-07-18 10:14:57,759 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40033,1689675272048; numProcessing=1 2023-07-18 10:14:57,760 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:57,760 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:57,760 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:57,760 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:57,760 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35633,1689675275991 2023-07-18 10:14:57,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/default/unmovedTable/2929c6f81410eb8cdf881f05484b0086/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 10:14:57,761 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40033,1689675272048 already deleted, retry=false 
2023-07-18 10:14:57,761 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40033,1689675272048 expired; onlineServers=3 2023-07-18 10:14:57,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:57,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2929c6f81410eb8cdf881f05484b0086: 2023-07-18 10:14:57,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689675292461.2929c6f81410eb8cdf881f05484b0086. 2023-07-18 10:14:57,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c279e5fb45e4dd6ee6ca1bf14c1ea18e, disabling compactions & flushes 2023-07-18 10:14:57,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:57,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:57,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. after waiting 0 ms 2023-07-18 10:14:57,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:57,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c279e5fb45e4dd6ee6ca1bf14c1ea18e 1/1 column families, dataSize=22.07 KB heapSize=36.54 KB 2023-07-18 10:14:57,769 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f70ae5edf6894cc490b6a5071ac848fb 2023-07-18 10:14:57,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/.tmp/m/31c83316861c440b86b75d9604dfee36 2023-07-18 10:14:57,803 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/rep_barrier/fe3fe0e71ed2471dae024f3f52728e26 2023-07-18 10:14:57,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 31c83316861c440b86b75d9604dfee36 2023-07-18 10:14:57,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/.tmp/m/31c83316861c440b86b75d9604dfee36 as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m/31c83316861c440b86b75d9604dfee36 2023-07-18 10:14:57,813 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fe3fe0e71ed2471dae024f3f52728e26 2023-07-18 10:14:57,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 31c83316861c440b86b75d9604dfee36 2023-07-18 10:14:57,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/m/31c83316861c440b86b75d9604dfee36, entries=22, sequenceid=101, filesize=5.9 K 2023-07-18 10:14:57,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22601, heapSize ~36.52 KB/37400, currentSize=0 B/0 for c279e5fb45e4dd6ee6ca1bf14c1ea18e in 64ms, sequenceid=101, compaction requested=false 2023-07-18 10:14:57,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 10:14:57,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/rsgroup/c279e5fb45e4dd6ee6ca1bf14c1ea18e/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-18 10:14:57,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:14:57,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 2023-07-18 10:14:57,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c279e5fb45e4dd6ee6ca1bf14c1ea18e: 2023-07-18 10:14:57,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689675275170.c279e5fb45e4dd6ee6ca1bf14c1ea18e. 
2023-07-18 10:14:57,840 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/table/5c16ff9986b64f3b9a3be0028f0ab734 2023-07-18 10:14:57,846 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5c16ff9986b64f3b9a3be0028f0ab734 2023-07-18 10:14:57,847 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/info/f70ae5edf6894cc490b6a5071ac848fb as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info/f70ae5edf6894cc490b6a5071ac848fb 2023-07-18 10:14:57,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f70ae5edf6894cc490b6a5071ac848fb 2023-07-18 10:14:57,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/info/f70ae5edf6894cc490b6a5071ac848fb, entries=72, sequenceid=210, filesize=13.1 K 2023-07-18 10:14:57,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/rep_barrier/fe3fe0e71ed2471dae024f3f52728e26 as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier/fe3fe0e71ed2471dae024f3f52728e26 2023-07-18 10:14:57,861 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:57,861 INFO [RS:1;jenkins-hbase4:40033] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40033,1689675272048; zookeeper connection closed. 
2023-07-18 10:14:57,861 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40033-0x10177ed05f80002, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:57,862 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@72fbced9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@72fbced9 2023-07-18 10:14:57,862 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fe3fe0e71ed2471dae024f3f52728e26 2023-07-18 10:14:57,862 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/rep_barrier/fe3fe0e71ed2471dae024f3f52728e26, entries=8, sequenceid=210, filesize=5.8 K 2023-07-18 10:14:57,863 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/.tmp/table/5c16ff9986b64f3b9a3be0028f0ab734 as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table/5c16ff9986b64f3b9a3be0028f0ab734 2023-07-18 10:14:57,871 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5c16ff9986b64f3b9a3be0028f0ab734 2023-07-18 10:14:57,872 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/table/5c16ff9986b64f3b9a3be0028f0ab734, entries=16, sequenceid=210, filesize=6.0 K 2023-07-18 10:14:57,872 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~38.63 KB/39552, heapSize ~62.95 KB/64464, currentSize=0 B/0 for 1588230740 in 188ms, sequenceid=210, compaction requested=false 2023-07-18 10:14:57,873 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 10:14:57,879 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40931,1689675272348; all regions closed. 
2023-07-18 10:14:57,883 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-18 10:14:57,886 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=95 2023-07-18 10:14:57,887 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:14:57,887 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 10:14:57,888 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 10:14:57,888 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 10:14:57,888 DEBUG [RS:2;jenkins-hbase4:40931] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs 2023-07-18 10:14:57,888 INFO [RS:2;jenkins-hbase4:40931] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40931%2C1689675272348.meta:.meta(num 1689675274665) 2023-07-18 10:14:57,893 DEBUG [RS:2;jenkins-hbase4:40931] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs 2023-07-18 10:14:57,893 INFO [RS:2;jenkins-hbase4:40931] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40931%2C1689675272348:(num 1689675274556) 2023-07-18 10:14:57,893 DEBUG [RS:2;jenkins-hbase4:40931] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:57,894 INFO [RS:2;jenkins-hbase4:40931] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:14:57,894 INFO [RS:2;jenkins-hbase4:40931] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 10:14:57,894 INFO [RS:2;jenkins-hbase4:40931] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:14:57,894 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:14:57,894 INFO [RS:2;jenkins-hbase4:40931] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:14:57,894 INFO [RS:2;jenkins-hbase4:40931] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:14:57,895 INFO [RS:2;jenkins-hbase4:40931] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40931 2023-07-18 10:14:57,962 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:57,962 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:35633-0x10177ed05f8000b, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:57,962 INFO [RS:3;jenkins-hbase4:35633] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35633,1689675275991; zookeeper connection closed. 
2023-07-18 10:14:57,962 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35633,1689675275991] 2023-07-18 10:14:57,963 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35633,1689675275991; numProcessing=2 2023-07-18 10:14:57,963 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:57,964 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40931,1689675272348 2023-07-18 10:14:57,964 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:57,964 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35633,1689675275991 already deleted, retry=false 2023-07-18 10:14:57,965 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35633,1689675275991 expired; onlineServers=2 2023-07-18 10:14:57,965 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40931,1689675272348] 2023-07-18 10:14:57,965 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40931,1689675272348; numProcessing=3 2023-07-18 10:14:57,968 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40931,1689675272348 already deleted, retry=false 2023-07-18 10:14:57,968 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40931,1689675272348 expired; onlineServers=1 2023-07-18 10:14:57,974 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@24ebaaa4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@24ebaaa4 2023-07-18 10:14:58,075 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:58,075 INFO [RS:2;jenkins-hbase4:40931] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40931,1689675272348; zookeeper connection closed. 2023-07-18 10:14:58,075 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10177ed05f80003, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:58,083 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42163,1689675271845; all regions closed. 
2023-07-18 10:14:58,084 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6a05fed5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6a05fed5 2023-07-18 10:14:58,091 DEBUG [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs 2023-07-18 10:14:58,091 INFO [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42163%2C1689675271845.meta:.meta(num 1689675281832) 2023-07-18 10:14:58,100 DEBUG [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/oldWALs 2023-07-18 10:14:58,100 INFO [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42163%2C1689675271845:(num 1689675274556) 2023-07-18 10:14:58,100 DEBUG [RS:0;jenkins-hbase4:42163] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:58,100 INFO [RS:0;jenkins-hbase4:42163] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:14:58,100 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 10:14:58,100 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:14:58,101 INFO [RS:0;jenkins-hbase4:42163] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42163 2023-07-18 10:14:58,104 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42163,1689675271845 2023-07-18 10:14:58,104 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:14:58,105 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42163,1689675271845] 2023-07-18 10:14:58,106 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42163,1689675271845; numProcessing=4 2023-07-18 10:14:58,107 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42163,1689675271845 already deleted, retry=false 2023-07-18 10:14:58,107 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42163,1689675271845 expired; onlineServers=0 2023-07-18 10:14:58,107 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42907,1689675269765' ***** 2023-07-18 10:14:58,107 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 10:14:58,108 DEBUG [M:0;jenkins-hbase4:42907] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@56885367, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:14:58,108 INFO [M:0;jenkins-hbase4:42907] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:14:58,110 INFO [M:0;jenkins-hbase4:42907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@436587aa{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 10:14:58,111 INFO [M:0;jenkins-hbase4:42907] server.AbstractConnector(383): Stopped ServerConnector@1f13a933{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:14:58,111 INFO [M:0;jenkins-hbase4:42907] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:14:58,112 INFO [M:0;jenkins-hbase4:42907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@390c5cdd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:14:58,112 INFO [M:0;jenkins-hbase4:42907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7936602a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir/,STOPPED} 2023-07-18 10:14:58,113 INFO [M:0;jenkins-hbase4:42907] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42907,1689675269765 2023-07-18 10:14:58,113 INFO [M:0;jenkins-hbase4:42907] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42907,1689675269765; all regions closed. 2023-07-18 10:14:58,113 DEBUG [M:0;jenkins-hbase4:42907] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:14:58,113 INFO [M:0;jenkins-hbase4:42907] master.HMaster(1491): Stopping master jetty server 2023-07-18 10:14:58,113 INFO [M:0;jenkins-hbase4:42907] server.AbstractConnector(383): Stopped ServerConnector@6e640ae4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:14:58,113 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 10:14:58,113 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:14:58,114 DEBUG [M:0;jenkins-hbase4:42907] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 10:14:58,114 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 10:14:58,114 DEBUG [M:0;jenkins-hbase4:42907] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 10:14:58,114 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675274039] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675274039,5,FailOnTimeoutGroup] 2023-07-18 10:14:58,114 INFO [M:0;jenkins-hbase4:42907] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-07-18 10:14:58,114 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:14:58,114 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675274039] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675274039,5,FailOnTimeoutGroup] 2023-07-18 10:14:58,114 INFO [M:0;jenkins-hbase4:42907] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 10:14:58,115 INFO [M:0;jenkins-hbase4:42907] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 10:14:58,115 DEBUG [M:0;jenkins-hbase4:42907] master.HMaster(1512): Stopping service threads 2023-07-18 10:14:58,115 INFO [M:0;jenkins-hbase4:42907] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 10:14:58,116 ERROR [M:0;jenkins-hbase4:42907] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-18 10:14:58,116 INFO [M:0;jenkins-hbase4:42907] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 10:14:58,116 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 10:14:58,117 DEBUG [M:0;jenkins-hbase4:42907] zookeeper.ZKUtil(398): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 10:14:58,117 WARN [M:0;jenkins-hbase4:42907] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 10:14:58,117 INFO [M:0;jenkins-hbase4:42907] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 10:14:58,117 INFO [M:0;jenkins-hbase4:42907] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 10:14:58,118 DEBUG [M:0;jenkins-hbase4:42907] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 10:14:58,118 INFO [M:0;jenkins-hbase4:42907] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:14:58,118 DEBUG [M:0;jenkins-hbase4:42907] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:14:58,118 DEBUG [M:0;jenkins-hbase4:42907] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 10:14:58,118 DEBUG [M:0;jenkins-hbase4:42907] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 10:14:58,118 INFO [M:0;jenkins-hbase4:42907] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.01 KB heapSize=621.06 KB 2023-07-18 10:14:58,137 INFO [M:0;jenkins-hbase4:42907] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.01 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9cdffe5ac1a840e1a31a58065c15a318 2023-07-18 10:14:58,144 DEBUG [M:0;jenkins-hbase4:42907] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9cdffe5ac1a840e1a31a58065c15a318 as hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9cdffe5ac1a840e1a31a58065c15a318 2023-07-18 10:14:58,151 INFO [M:0;jenkins-hbase4:42907] regionserver.HStore(1080): Added hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9cdffe5ac1a840e1a31a58065c15a318, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-18 10:14:58,152 INFO [M:0;jenkins-hbase4:42907] regionserver.HRegion(2948): Finished flush of dataSize ~519.01 KB/531462, heapSize ~621.05 KB/635952, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 34ms, sequenceid=1152, compaction requested=false 2023-07-18 10:14:58,154 INFO [M:0;jenkins-hbase4:42907] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:14:58,154 DEBUG [M:0;jenkins-hbase4:42907] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:14:58,164 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:14:58,164 INFO [M:0;jenkins-hbase4:42907] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 10:14:58,165 INFO [M:0;jenkins-hbase4:42907] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42907 2023-07-18 10:14:58,168 DEBUG [M:0;jenkins-hbase4:42907] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42907,1689675269765 already deleted, retry=false 2023-07-18 10:14:58,250 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:58,250 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42163,1689675271845; zookeeper connection closed. 
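
The flush above (memstore written to a .tmp HFile, committed into the store, then "Finished flush of dataSize ~519.01 KB") is the standard HBase flush path, applied here to the master's internal master:store region during shutdown. As a rough illustration only, and not the internal code path the master uses, the equivalent memstore flush for an ordinary table can be requested through the public Admin API; the table name below is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Writes the table's memstore out as HFiles, the same
          // memstore-to-HFile step logged for master:store above.
          admin.flush(TableName.valueOf("someTable"));
        }
      }
    }
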
2023-07-18 10:14:58,250 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x10177ed05f80001, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:58,250 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@718919a0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@718919a0 2023-07-18 10:14:58,250 INFO [Listener at localhost/45689] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 10:14:58,350 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:58,350 INFO [M:0;jenkins-hbase4:42907] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42907,1689675269765; zookeeper connection closed. 2023-07-18 10:14:58,350 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): master:42907-0x10177ed05f80000, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:14:58,352 WARN [Listener at localhost/45689] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 10:14:58,360 INFO [Listener at localhost/45689] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 10:14:58,466 WARN [BP-1078778366-172.31.14.131-1689675266234 heartbeating to localhost/127.0.0.1:38869] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 10:14:58,467 WARN [BP-1078778366-172.31.14.131-1689675266234 heartbeating to localhost/127.0.0.1:38869] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1078778366-172.31.14.131-1689675266234 (Datanode Uuid 30525c1c-db5d-4f26-a1c7-6faee95ac827) service to localhost/127.0.0.1:38869 2023-07-18 10:14:58,469 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data5/current/BP-1078778366-172.31.14.131-1689675266234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:14:58,469 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data6/current/BP-1078778366-172.31.14.131-1689675266234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:14:58,471 WARN [Listener at localhost/45689] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 10:14:58,474 INFO [Listener at localhost/45689] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 10:14:58,520 WARN [BP-1078778366-172.31.14.131-1689675266234 heartbeating to localhost/127.0.0.1:38869] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1078778366-172.31.14.131-1689675266234 (Datanode Uuid e1f54303-a893-42d2-840d-6c8ceb04f86c) service to localhost/127.0.0.1:38869 2023-07-18 10:14:58,521 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data3/current/BP-1078778366-172.31.14.131-1689675266234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:14:58,521 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data4/current/BP-1078778366-172.31.14.131-1689675266234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:14:58,579 WARN [Listener at localhost/45689] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 10:14:58,581 INFO [Listener at localhost/45689] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 10:14:58,690 WARN [BP-1078778366-172.31.14.131-1689675266234 heartbeating to localhost/127.0.0.1:38869] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 10:14:58,690 WARN [BP-1078778366-172.31.14.131-1689675266234 heartbeating to localhost/127.0.0.1:38869] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1078778366-172.31.14.131-1689675266234 (Datanode Uuid 747d601c-3feb-4b95-918b-50fbb899c0cd) service to localhost/127.0.0.1:38869 2023-07-18 10:14:58,691 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data1/current/BP-1078778366-172.31.14.131-1689675266234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:14:58,691 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/cluster_1171a87e-3be3-e79e-982b-e0db3fcae7ba/dfs/data/data2/current/BP-1078778366-172.31.14.131-1689675266234] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:14:58,725 INFO [Listener at localhost/45689] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 10:14:58,754 INFO [Listener at localhost/45689] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 10:14:58,818 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 10:14:58,818 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 10:14:58,818 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.log.dir so I do NOT create it in target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c 2023-07-18 10:14:58,818 INFO [Listener at localhost/45689] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9b1fcaf1-c393-3f9c-dea6-169953fe1c96/hadoop.tmp.dir so I do NOT create it in target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c 2023-07-18 10:14:58,818 INFO [Listener at localhost/45689] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625, deleteOnExit=true 2023-07-18 10:14:58,818 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 10:14:58,819 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/test.cache.data in system properties and HBase conf 2023-07-18 10:14:58,819 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 10:14:58,819 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir in system properties and HBase conf 2023-07-18 10:14:58,819 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 10:14:58,819 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 10:14:58,820 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 10:14:58,820 DEBUG [Listener at localhost/45689] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 10:14:58,820 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 10:14:58,820 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 10:14:58,820 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 10:14:58,821 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 10:14:58,821 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 10:14:58,821 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 10:14:58,821 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 10:14:58,821 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 10:14:58,821 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 10:14:58,822 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/nfs.dump.dir in system properties and HBase conf 2023-07-18 10:14:58,822 INFO [Listener at localhost/45689] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir in system properties and HBase conf 2023-07-18 10:14:58,822 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 10:14:58,822 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 10:14:58,822 INFO [Listener at localhost/45689] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 10:14:58,828 WARN [Listener at localhost/45689] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 10:14:58,828 WARN [Listener at localhost/45689] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 10:14:58,851 DEBUG [Listener at localhost/45689-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10177ed05f8000a, quorum=127.0.0.1:53154, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 10:14:58,858 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10177ed05f8000a, quorum=127.0.0.1:53154, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 10:14:58,896 WARN [Listener at localhost/45689] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:14:58,898 INFO [Listener at localhost/45689] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:14:58,904 INFO [Listener at localhost/45689] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir/Jetty_localhost_45633_hdfs____.9exbvl/webapp 2023-07-18 10:14:59,001 INFO [Listener at localhost/45689] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45633 2023-07-18 10:14:59,038 WARN [Listener at localhost/45689] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 10:14:59,039 WARN [Listener at localhost/45689] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 10:14:59,123 WARN [Listener at localhost/43981] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:14:59,175 WARN [Listener at localhost/43981] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:14:59,178 WARN [Listener 
at localhost/43981] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:14:59,179 INFO [Listener at localhost/43981] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:14:59,185 INFO [Listener at localhost/43981] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir/Jetty_localhost_42961_datanode____nk8ifx/webapp 2023-07-18 10:14:59,319 INFO [Listener at localhost/43981] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42961 2023-07-18 10:14:59,346 WARN [Listener at localhost/45951] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:14:59,377 WARN [Listener at localhost/45951] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:14:59,380 WARN [Listener at localhost/45951] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:14:59,382 INFO [Listener at localhost/45951] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:14:59,386 INFO [Listener at localhost/45951] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir/Jetty_localhost_43823_datanode____dxc51d/webapp 2023-07-18 10:14:59,471 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xee589403351dfdc: Processing first storage report for DS-7e436a9b-6fa1-42f4-a69d-de298812063d from datanode 36b2f150-5656-4ef1-b3e5-1693ce8dc9f2 2023-07-18 10:14:59,472 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xee589403351dfdc: from storage DS-7e436a9b-6fa1-42f4-a69d-de298812063d node DatanodeRegistration(127.0.0.1:34075, datanodeUuid=36b2f150-5656-4ef1-b3e5-1693ce8dc9f2, infoPort=45681, infoSecurePort=0, ipcPort=45951, storageInfo=lv=-57;cid=testClusterID;nsid=1098668287;c=1689675298830), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:59,472 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xee589403351dfdc: Processing first storage report for DS-81fbbf16-b748-44c5-a30b-34f5d2b71f8a from datanode 36b2f150-5656-4ef1-b3e5-1693ce8dc9f2 2023-07-18 10:14:59,472 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xee589403351dfdc: from storage DS-81fbbf16-b748-44c5-a30b-34f5d2b71f8a node DatanodeRegistration(127.0.0.1:34075, datanodeUuid=36b2f150-5656-4ef1-b3e5-1693ce8dc9f2, infoPort=45681, infoSecurePort=0, ipcPort=45951, storageInfo=lv=-57;cid=testClusterID;nsid=1098668287;c=1689675298830), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:59,515 INFO [Listener at localhost/45951] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43823 2023-07-18 10:14:59,530 WARN [Listener at localhost/45307] common.MetricsLoggerTask(153): Metrics logging will not be async since 
the logger is not log4j 2023-07-18 10:14:59,577 WARN [Listener at localhost/45307] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:14:59,582 WARN [Listener at localhost/45307] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:14:59,584 INFO [Listener at localhost/45307] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:14:59,588 INFO [Listener at localhost/45307] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir/Jetty_localhost_45967_datanode____.uvqww0/webapp 2023-07-18 10:14:59,678 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6632dad3af397d5b: Processing first storage report for DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801 from datanode 25ad4980-a3f4-44a3-b19a-c63dae902ce5 2023-07-18 10:14:59,678 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6632dad3af397d5b: from storage DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801 node DatanodeRegistration(127.0.0.1:39159, datanodeUuid=25ad4980-a3f4-44a3-b19a-c63dae902ce5, infoPort=46349, infoSecurePort=0, ipcPort=45307, storageInfo=lv=-57;cid=testClusterID;nsid=1098668287;c=1689675298830), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:59,678 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6632dad3af397d5b: Processing first storage report for DS-9932c0ce-ff2b-4929-948c-c296860776bd from datanode 25ad4980-a3f4-44a3-b19a-c63dae902ce5 2023-07-18 10:14:59,678 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6632dad3af397d5b: from storage DS-9932c0ce-ff2b-4929-948c-c296860776bd node DatanodeRegistration(127.0.0.1:39159, datanodeUuid=25ad4980-a3f4-44a3-b19a-c63dae902ce5, infoPort=46349, infoSecurePort=0, ipcPort=45307, storageInfo=lv=-57;cid=testClusterID;nsid=1098668287;c=1689675298830), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:59,706 INFO [Listener at localhost/45307] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45967 2023-07-18 10:14:59,717 WARN [Listener at localhost/40599] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:14:59,836 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5864345911e87eca: Processing first storage report for DS-e245721d-4073-4544-a10d-bdc7da090b29 from datanode 44ec7168-64c5-412f-bfaf-71023fcf60af 2023-07-18 10:14:59,836 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5864345911e87eca: from storage DS-e245721d-4073-4544-a10d-bdc7da090b29 node DatanodeRegistration(127.0.0.1:40959, datanodeUuid=44ec7168-64c5-412f-bfaf-71023fcf60af, infoPort=43087, infoSecurePort=0, ipcPort=40599, storageInfo=lv=-57;cid=testClusterID;nsid=1098668287;c=1689675298830), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:59,836 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5864345911e87eca: Processing first storage 
report for DS-8f330e0d-1178-484f-8b6f-8e72ebe2b3e4 from datanode 44ec7168-64c5-412f-bfaf-71023fcf60af 2023-07-18 10:14:59,836 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5864345911e87eca: from storage DS-8f330e0d-1178-484f-8b6f-8e72ebe2b3e4 node DatanodeRegistration(127.0.0.1:40959, datanodeUuid=44ec7168-64c5-412f-bfaf-71023fcf60af, infoPort=43087, infoSecurePort=0, ipcPort=40599, storageInfo=lv=-57;cid=testClusterID;nsid=1098668287;c=1689675298830), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:14:59,930 DEBUG [Listener at localhost/40599] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c 2023-07-18 10:14:59,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:14:59,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 10:14:59,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 10:14:59,962 INFO [Listener at localhost/40599] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/zookeeper_0, clientPort=59011, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 10:14:59,970 INFO [Listener at localhost/40599] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59011 2023-07-18 10:14:59,970 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:14:59,972 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,036 INFO [Listener at localhost/40599] util.FSUtils(471): Created version file at hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6 with version=8 2023-07-18 10:15:00,036 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/hbase-staging 2023-07-18 10:15:00,037 DEBUG [Listener at localhost/40599] hbase.LocalHBaseCluster(134): 
Setting Master Port to random. 2023-07-18 10:15:00,037 DEBUG [Listener at localhost/40599] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 10:15:00,037 DEBUG [Listener at localhost/40599] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 10:15:00,037 DEBUG [Listener at localhost/40599] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-18 10:15:00,039 INFO [Listener at localhost/40599] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:00,039 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,039 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,039 INFO [Listener at localhost/40599] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:00,039 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,039 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:00,040 INFO [Listener at localhost/40599] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:00,040 INFO [Listener at localhost/40599] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42475 2023-07-18 10:15:00,041 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,043 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,044 INFO [Listener at localhost/40599] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42475 connecting to ZooKeeper ensemble=127.0.0.1:59011 2023-07-18 10:15:00,055 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:424750x0, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:00,056 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42475-0x10177ed7f730000 connected 2023-07-18 10:15:00,082 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:00,087 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:00,087 DEBUG 
[Listener at localhost/40599] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:00,092 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42475 2023-07-18 10:15:00,094 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42475 2023-07-18 10:15:00,094 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42475 2023-07-18 10:15:00,096 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42475 2023-07-18 10:15:00,096 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42475 2023-07-18 10:15:00,099 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:00,099 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:00,100 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:00,100 INFO [Listener at localhost/40599] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 10:15:00,100 INFO [Listener at localhost/40599] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:00,100 INFO [Listener at localhost/40599] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:00,101 INFO [Listener at localhost/40599] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
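
The handlerCount=3 / maxQueueLength=30 values in the RpcExecutor lines above reflect the mini-cluster configuration rather than production defaults (maxQueueLength=30 is consistent with the usual 10 call-queue slots per handler, though that derivation is an inference, not something this log states). A hedged sketch of how a test can shrink those pools; the keys exist in HBase, but these particular values are examples and are not read from TestRSGroupsAdmin1:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class SmallRpcConfigSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Small RPC handler pool, comparable to the "handlerCount=3" lines above.
        conf.setInt("hbase.regionserver.handler.count", 3);
        // Explicit call-queue length; when unset it defaults to a per-handler multiple.
        conf.setInt("hbase.ipc.server.max.callqueue.length", 30);
        HBaseTestingUtility util = new HBaseTestingUtility(conf);
        util.startMiniCluster();
        util.shutdownMiniCluster();
      }
    }
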
2023-07-18 10:15:00,102 INFO [Listener at localhost/40599] http.HttpServer(1146): Jetty bound to port 34045 2023-07-18 10:15:00,102 INFO [Listener at localhost/40599] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:00,119 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,120 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4da0099e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:00,120 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,120 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@611aa7c8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:00,258 INFO [Listener at localhost/40599] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:00,259 INFO [Listener at localhost/40599] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:00,260 INFO [Listener at localhost/40599] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:00,260 INFO [Listener at localhost/40599] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 10:15:00,261 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,263 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@220815cc{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir/jetty-0_0_0_0-34045-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4724173245368935099/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 10:15:00,264 INFO [Listener at localhost/40599] server.AbstractConnector(333): Started ServerConnector@44e94428{HTTP/1.1, (http/1.1)}{0.0.0.0:34045} 2023-07-18 10:15:00,265 INFO [Listener at localhost/40599] server.Server(415): Started @35976ms 2023-07-18 10:15:00,265 INFO [Listener at localhost/40599] master.HMaster(444): hbase.rootdir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6, hbase.cluster.distributed=false 2023-07-18 10:15:00,284 INFO [Listener at localhost/40599] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:00,285 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,285 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,285 
INFO [Listener at localhost/40599] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:00,285 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,285 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:00,285 INFO [Listener at localhost/40599] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:00,286 INFO [Listener at localhost/40599] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35223 2023-07-18 10:15:00,286 INFO [Listener at localhost/40599] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:15:00,288 DEBUG [Listener at localhost/40599] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:15:00,288 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,289 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,290 INFO [Listener at localhost/40599] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35223 connecting to ZooKeeper ensemble=127.0.0.1:59011 2023-07-18 10:15:00,293 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:352230x0, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:00,295 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35223-0x10177ed7f730001 connected 2023-07-18 10:15:00,295 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:00,296 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:00,296 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:00,297 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35223 2023-07-18 10:15:00,297 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35223 2023-07-18 10:15:00,297 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35223 2023-07-18 10:15:00,299 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35223 2023-07-18 10:15:00,299 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35223 2023-07-18 10:15:00,301 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:00,301 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:00,301 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:00,301 INFO [Listener at localhost/40599] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:15:00,301 INFO [Listener at localhost/40599] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:00,301 INFO [Listener at localhost/40599] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:00,302 INFO [Listener at localhost/40599] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 10:15:00,303 INFO [Listener at localhost/40599] http.HttpServer(1146): Jetty bound to port 37277 2023-07-18 10:15:00,303 INFO [Listener at localhost/40599] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:00,304 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,305 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a4ef6d8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:00,305 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,305 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39385d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:00,444 INFO [Listener at localhost/40599] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:00,445 INFO [Listener at localhost/40599] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:00,445 INFO [Listener at localhost/40599] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:00,446 INFO [Listener at localhost/40599] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:15:00,447 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,448 INFO 
[Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@8711218{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir/jetty-0_0_0_0-37277-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4780407194666473865/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:00,449 INFO [Listener at localhost/40599] server.AbstractConnector(333): Started ServerConnector@a8ccbde{HTTP/1.1, (http/1.1)}{0.0.0.0:37277} 2023-07-18 10:15:00,449 INFO [Listener at localhost/40599] server.Server(415): Started @36161ms 2023-07-18 10:15:00,467 INFO [Listener at localhost/40599] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:00,467 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,468 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,468 INFO [Listener at localhost/40599] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:00,468 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,468 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:00,468 INFO [Listener at localhost/40599] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:00,469 INFO [Listener at localhost/40599] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38763 2023-07-18 10:15:00,469 INFO [Listener at localhost/40599] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:15:00,471 DEBUG [Listener at localhost/40599] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:15:00,472 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,473 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,473 INFO [Listener at localhost/40599] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38763 connecting to ZooKeeper ensemble=127.0.0.1:59011 2023-07-18 10:15:00,479 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:387630x0, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
10:15:00,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38763-0x10177ed7f730002 connected 2023-07-18 10:15:00,481 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:00,484 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:00,485 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:00,490 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38763 2023-07-18 10:15:00,493 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38763 2023-07-18 10:15:00,494 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38763 2023-07-18 10:15:00,498 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38763 2023-07-18 10:15:00,498 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38763 2023-07-18 10:15:00,501 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:00,501 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:00,501 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:00,501 INFO [Listener at localhost/40599] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:15:00,501 INFO [Listener at localhost/40599] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:00,502 INFO [Listener at localhost/40599] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:00,502 INFO [Listener at localhost/40599] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
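
Each region server above immediately sets watchers on /hbase/master, /hbase/running and /hbase/acl even though those znodes may not exist yet; HBase does this internally through ZKWatcher/ZKUtil. A standalone sketch of the same exists-with-watch mechanism using the plain ZooKeeper client (an illustration of the idea, not the HBase code itself; the quorum 127.0.0.1:59011 matches the MiniZooKeeperCluster client port logged above):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ExistsWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:59011", 30000, event -> {
          if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
            connected.countDown();
          }
        });
        connected.await();
        // exists() with a watcher returns null when the znode is absent but still
        // registers the watch, which is what the
        // "Set watcher on znode that does not yet exist, /hbase/master" lines describe.
        Stat stat = zk.exists("/hbase/master", event ->
            System.out.println("znode event: " + event.getType() + " " + event.getPath()));
        System.out.println("/hbase/master exists now? " + (stat != null));
        zk.close();
      }
    }
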
2023-07-18 10:15:00,502 INFO [Listener at localhost/40599] http.HttpServer(1146): Jetty bound to port 34423 2023-07-18 10:15:00,502 INFO [Listener at localhost/40599] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:00,507 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,508 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ac126e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:00,508 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,508 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6c0f5bac{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:00,645 INFO [Listener at localhost/40599] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:00,647 INFO [Listener at localhost/40599] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:00,648 INFO [Listener at localhost/40599] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:00,648 INFO [Listener at localhost/40599] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 10:15:00,649 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,650 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@51a3754a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir/jetty-0_0_0_0-34423-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8477456259780335074/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:00,651 INFO [Listener at localhost/40599] server.AbstractConnector(333): Started ServerConnector@19e45c9f{HTTP/1.1, (http/1.1)}{0.0.0.0:34423} 2023-07-18 10:15:00,652 INFO [Listener at localhost/40599] server.Server(415): Started @36363ms 2023-07-18 10:15:00,665 INFO [Listener at localhost/40599] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:00,665 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,665 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,666 INFO [Listener at localhost/40599] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:00,666 INFO 
[Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:00,666 INFO [Listener at localhost/40599] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:00,666 INFO [Listener at localhost/40599] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:00,666 INFO [Listener at localhost/40599] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43961 2023-07-18 10:15:00,667 INFO [Listener at localhost/40599] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:15:00,673 DEBUG [Listener at localhost/40599] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:15:00,673 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,674 INFO [Listener at localhost/40599] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,675 INFO [Listener at localhost/40599] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43961 connecting to ZooKeeper ensemble=127.0.0.1:59011 2023-07-18 10:15:00,680 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:439610x0, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:00,682 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:439610x0, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:00,683 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43961-0x10177ed7f730003 connected 2023-07-18 10:15:00,683 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:00,684 DEBUG [Listener at localhost/40599] zookeeper.ZKUtil(164): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:00,685 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43961 2023-07-18 10:15:00,685 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43961 2023-07-18 10:15:00,685 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43961 2023-07-18 10:15:00,686 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43961 2023-07-18 10:15:00,686 DEBUG [Listener at localhost/40599] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=43961 2023-07-18 10:15:00,688 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:00,688 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:00,688 INFO [Listener at localhost/40599] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:00,689 INFO [Listener at localhost/40599] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:15:00,689 INFO [Listener at localhost/40599] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:00,689 INFO [Listener at localhost/40599] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:00,689 INFO [Listener at localhost/40599] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 10:15:00,690 INFO [Listener at localhost/40599] http.HttpServer(1146): Jetty bound to port 45409 2023-07-18 10:15:00,690 INFO [Listener at localhost/40599] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:00,698 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,698 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@24a41ef0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:00,699 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,699 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@445d8bcc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:00,818 INFO [Listener at localhost/40599] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:00,819 INFO [Listener at localhost/40599] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:00,819 INFO [Listener at localhost/40599] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:00,819 INFO [Listener at localhost/40599] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:15:00,820 INFO [Listener at localhost/40599] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:00,821 INFO [Listener at localhost/40599] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@309982d1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/java.io.tmpdir/jetty-0_0_0_0-45409-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6285393508154996994/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:00,822 INFO [Listener at localhost/40599] server.AbstractConnector(333): Started ServerConnector@244c8211{HTTP/1.1, (http/1.1)}{0.0.0.0:45409} 2023-07-18 10:15:00,822 INFO [Listener at localhost/40599] server.Server(415): Started @36534ms 2023-07-18 10:15:00,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:00,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@28fd2f{HTTP/1.1, (http/1.1)}{0.0.0.0:40395} 2023-07-18 10:15:00,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @36541ms 2023-07-18 10:15:00,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:00,831 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 10:15:00,832 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:00,833 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:00,833 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:00,833 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:00,833 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:00,833 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:00,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 10:15:00,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 10:15:00,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42475,1689675300038 from backup master directory 2023-07-18 10:15:00,838 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:00,838 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:15:00,838 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 10:15:00,838 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:00,854 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 10:15:00,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/hbase.id with ID: c4997751-476c-4f02-a011-b2623c4daa9d 2023-07-18 10:15:00,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:00,878 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:00,894 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x57d6ad96 to 127.0.0.1:59011 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:00,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22d4f763, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:00,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:00,900 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 10:15:00,900 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating 
WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:00,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store-tmp 2023-07-18 10:15:00,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:00,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 10:15:00,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:00,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:00,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 10:15:00,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:00,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
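The 'master:store' descriptor printed above (a single 'proc' family with VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536') can be expressed with the public 2.x descriptor builders. A sketch of an equivalent descriptor for illustration only; the MasterRegion code path builds its own internally:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptorSketch {
        public static TableDescriptor build() {
            return TableDescriptorBuilder.newBuilder(TableName.valueOf("master", "store"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                    .setMaxVersions(1)                   // VERSIONS => '1'
                    .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                    .setBlocksize(64 * 1024)             // BLOCKSIZE => '65536'
                    .setInMemory(false)                  // IN_MEMORY => 'false'
                    .build())
                .build();
        }
    }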
2023-07-18 10:15:00,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:15:00,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/WALs/jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:00,915 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42475%2C1689675300038, suffix=, logDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/WALs/jenkins-hbase4.apache.org,42475,1689675300038, archiveDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/oldWALs, maxLogs=10 2023-07-18 10:15:00,930 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK] 2023-07-18 10:15:00,931 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK] 2023-07-18 10:15:00,932 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK] 2023-07-18 10:15:00,939 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/WALs/jenkins-hbase4.apache.org,42475,1689675300038/jenkins-hbase4.apache.org%2C42475%2C1689675300038.1689675300915 2023-07-18 10:15:00,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK], DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK]] 2023-07-18 10:15:00,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:00,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:00,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:00,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:00,943 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:00,944 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 10:15:00,945 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 10:15:00,947 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:00,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:00,954 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:00,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:00,958 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:00,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9650100160, jitterRate=-0.10126438736915588}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:15:00,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:15:00,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 10:15:00,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 10:15:00,960 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 10:15:00,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 10:15:00,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 10:15:00,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 10:15:00,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 10:15:00,965 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 10:15:00,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 10:15:00,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 10:15:00,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 10:15:00,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 10:15:00,969 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:00,970 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 10:15:00,970 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 10:15:00,971 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 10:15:00,973 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:00,973 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:00,973 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 10:15:00,973 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:00,973 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:00,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42475,1689675300038, sessionid=0x10177ed7f730000, setting cluster-up flag (Was=false) 2023-07-18 10:15:00,979 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:00,987 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 10:15:00,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:00,991 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:00,997 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 10:15:00,998 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:00,999 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.hbase-snapshot/.tmp 2023-07-18 10:15:01,000 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 10:15:01,000 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 10:15:01,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 10:15:01,002 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:15:01,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
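The coprocessor lines above show RSGroupAdminEndpoint being registered on the master. In a configuration-driven setup that endpoint and the group-aware balancer are wired in with two properties before the cluster starts; a sketch using the stock HBase keys, with the Configuration standing in for whatever the test utility hands around:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RSGroupConfigSketch {
        public static Configuration rsGroupConf() {
            Configuration conf = HBaseConfiguration.create();
            // Load the rsgroup admin endpoint on the master.
            conf.set("hbase.coprocessor.master.classes",
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
            // Use the rsgroup-aware load balancer.
            conf.set("hbase.master.loadbalancer.class",
                "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
            return conf;
        }
    }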
2023-07-18 10:15:01,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-18 10:15:01,004 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 10:15:01,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 10:15:01,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 10:15:01,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 10:15:01,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
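The StochasticLoadBalancer "Loaded config" line above echoes values read from configuration; the logged maxSteps, runMaxSteps, stepsPerRegion and maxRunningTime correspond (to the best of my reading) to the keys below. A sketch only, using the same values the log prints:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
        public static Configuration tuned() {
            Configuration conf = HBaseConfiguration.create();
            conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
            conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
            conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
            conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
            return conf;
        }
    }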
2023-07-18 10:15:01,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:15:01,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:15:01,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:15:01,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:15:01,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 10:15:01,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:01,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689675331018 2023-07-18 10:15:01,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 10:15:01,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 10:15:01,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 10:15:01,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 10:15:01,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 10:15:01,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 10:15:01,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
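The executor.ExecutorService lines above amount to fixed-size worker pools keyed by event type (corePoolSize equal to maxPoolSize per pool). HBase's ExecutorService class is internal, but the sizing follows the ordinary JDK pattern; a generic sketch mirroring the MASTER_OPEN_REGION pool shape, with the submitted task as a placeholder:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class OpenRegionPoolSketch {
        public static void main(String[] args) {
            // corePoolSize=5, maxPoolSize=5, unbounded FIFO queue, as in the log line.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
            pool.allowCoreThreadTimeOut(true);   // let idle workers exit after the keep-alive
            pool.submit(() -> System.out.println("open-region task placeholder"));
            pool.shutdown();
        }
    }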
2023-07-18 10:15:01,020 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 10:15:01,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 10:15:01,020 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 10:15:01,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 10:15:01,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 10:15:01,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 10:15:01,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 10:15:01,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675301021,5,FailOnTimeoutGroup] 2023-07-18 10:15:01,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675301021,5,FailOnTimeoutGroup] 2023-07-18 10:15:01,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 10:15:01,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-18 10:15:01,022 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:01,025 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(951): ClusterId : c4997751-476c-4f02-a011-b2623c4daa9d 2023-07-18 10:15:01,025 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(951): ClusterId : c4997751-476c-4f02-a011-b2623c4daa9d 2023-07-18 10:15:01,026 DEBUG [RS:1;jenkins-hbase4:38763] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:15:01,029 INFO [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(951): ClusterId : c4997751-476c-4f02-a011-b2623c4daa9d 2023-07-18 10:15:01,029 DEBUG [RS:0;jenkins-hbase4:35223] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:15:01,031 DEBUG [RS:2;jenkins-hbase4:43961] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:15:01,034 DEBUG [RS:1;jenkins-hbase4:38763] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:15:01,034 DEBUG [RS:1;jenkins-hbase4:38763] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:15:01,034 DEBUG [RS:2;jenkins-hbase4:43961] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:15:01,034 DEBUG [RS:2;jenkins-hbase4:43961] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:15:01,042 DEBUG [RS:0;jenkins-hbase4:35223] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:15:01,042 DEBUG [RS:0;jenkins-hbase4:35223] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:15:01,044 DEBUG [RS:2;jenkins-hbase4:43961] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:15:01,045 DEBUG [RS:1;jenkins-hbase4:38763] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:15:01,047 DEBUG [RS:0;jenkins-hbase4:35223] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:15:01,047 DEBUG [RS:2;jenkins-hbase4:43961] zookeeper.ReadOnlyZKClient(139): Connect 0x28c7ef89 to 127.0.0.1:59011 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:01,047 DEBUG [RS:1;jenkins-hbase4:38763] 
zookeeper.ReadOnlyZKClient(139): Connect 0x6adf5391 to 127.0.0.1:59011 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:01,047 DEBUG [RS:0;jenkins-hbase4:35223] zookeeper.ReadOnlyZKClient(139): Connect 0x108c95bd to 127.0.0.1:59011 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:01,073 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:01,074 DEBUG [RS:2;jenkins-hbase4:43961] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@260004e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:01,074 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:01,074 DEBUG [RS:2;jenkins-hbase4:43961] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@223f84f9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:01,074 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6 2023-07-18 10:15:01,076 DEBUG [RS:1;jenkins-hbase4:38763] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@602738e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:01,076 DEBUG [RS:1;jenkins-hbase4:38763] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2525ba81, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:01,076 DEBUG [RS:0;jenkins-hbase4:35223] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e4b69dc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:01,076 DEBUG [RS:0;jenkins-hbase4:35223] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a74fecd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:01,084 DEBUG [RS:2;jenkins-hbase4:43961] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43961 2023-07-18 10:15:01,084 INFO [RS:2;jenkins-hbase4:43961] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:15:01,084 INFO [RS:2;jenkins-hbase4:43961] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:15:01,084 DEBUG [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:15:01,085 INFO [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42475,1689675300038 with isa=jenkins-hbase4.apache.org/172.31.14.131:43961, startcode=1689675300665 2023-07-18 10:15:01,085 DEBUG [RS:2;jenkins-hbase4:43961] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:15:01,086 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38763 2023-07-18 10:15:01,086 INFO [RS:1;jenkins-hbase4:38763] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:15:01,086 INFO [RS:1;jenkins-hbase4:38763] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:15:01,086 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 10:15:01,087 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42475,1689675300038 with isa=jenkins-hbase4.apache.org/172.31.14.131:38763, startcode=1689675300467 2023-07-18 10:15:01,087 DEBUG [RS:1;jenkins-hbase4:38763] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:15:01,090 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43067, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:15:01,091 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44331, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:15:01,093 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35223 2023-07-18 10:15:01,093 INFO [RS:0;jenkins-hbase4:35223] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:15:01,093 INFO [RS:0;jenkins-hbase4:35223] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:15:01,093 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:15:01,093 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42475,1689675300038 with isa=jenkins-hbase4.apache.org/172.31.14.131:35223, startcode=1689675300284 2023-07-18 10:15:01,094 DEBUG [RS:0;jenkins-hbase4:35223] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:15:01,100 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42475] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:01,100 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49881, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:15:01,100 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:15:01,101 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 10:15:01,101 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42475] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,101 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
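After the three reportForDuty / "Registering regionserver" exchanges above, a client can confirm the membership the master sees through the public Admin API. A sketch against a placeholder Configuration (tests would reuse the mini-cluster utility's conf instead of creating a fresh one):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListLiveServersSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();   // placeholder conf
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Should report the three region servers registered above.
                for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
                    System.out.println("live regionserver: " + sn);
                }
            }
        }
    }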
2023-07-18 10:15:01,102 DEBUG [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6 2023-07-18 10:15:01,102 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42475] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,102 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 10:15:01,102 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:15:01,102 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6 2023-07-18 10:15:01,102 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 10:15:01,102 DEBUG [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43981 2023-07-18 10:15:01,102 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43981 2023-07-18 10:15:01,102 DEBUG [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34045 2023-07-18 10:15:01,102 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34045 2023-07-18 10:15:01,102 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6 2023-07-18 10:15:01,102 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43981 2023-07-18 10:15:01,102 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34045 2023-07-18 10:15:01,108 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:01,110 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:01,110 DEBUG [RS:1;jenkins-hbase4:38763] zookeeper.ZKUtil(162): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,110 WARN [RS:1;jenkins-hbase4:38763] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 10:15:01,110 DEBUG [RS:2;jenkins-hbase4:43961] zookeeper.ZKUtil(162): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:01,112 INFO [RS:1;jenkins-hbase4:38763] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:01,112 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,113 DEBUG [RS:0;jenkins-hbase4:35223] zookeeper.ZKUtil(162): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,113 WARN [RS:0;jenkins-hbase4:35223] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:15:01,113 INFO [RS:0;jenkins-hbase4:35223] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:01,113 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,112 WARN [RS:2;jenkins-hbase4:43961] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:15:01,113 INFO [RS:2;jenkins-hbase4:43961] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:01,113 DEBUG [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:01,114 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43961,1689675300665] 2023-07-18 10:15:01,114 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35223,1689675300284] 2023-07-18 10:15:01,114 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38763,1689675300467] 2023-07-18 10:15:01,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 10:15:01,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/info 2023-07-18 10:15:01,123 DEBUG [RS:1;jenkins-hbase4:38763] zookeeper.ZKUtil(162): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:01,123 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 
EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 10:15:01,123 DEBUG [RS:0;jenkins-hbase4:35223] zookeeper.ZKUtil(162): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:01,123 DEBUG [RS:1;jenkins-hbase4:38763] zookeeper.ZKUtil(162): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,124 DEBUG [RS:2;jenkins-hbase4:43961] zookeeper.ZKUtil(162): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:01,124 DEBUG [RS:0;jenkins-hbase4:35223] zookeeper.ZKUtil(162): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,124 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:01,124 DEBUG [RS:1;jenkins-hbase4:38763] zookeeper.ZKUtil(162): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,125 DEBUG [RS:2;jenkins-hbase4:43961] zookeeper.ZKUtil(162): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,125 DEBUG [RS:0;jenkins-hbase4:35223] zookeeper.ZKUtil(162): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,125 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 10:15:01,125 DEBUG [RS:2;jenkins-hbase4:43961] zookeeper.ZKUtil(162): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,126 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:15:01,126 INFO [RS:1;jenkins-hbase4:38763] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:15:01,126 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.Replication(139): Replication stats-in-log 
period=300 seconds 2023-07-18 10:15:01,126 DEBUG [RS:2;jenkins-hbase4:43961] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:15:01,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:15:01,127 INFO [RS:2;jenkins-hbase4:43961] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:15:01,128 INFO [RS:1;jenkins-hbase4:38763] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:15:01,128 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 10:15:01,128 INFO [RS:1;jenkins-hbase4:38763] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:15:01,128 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,128 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:15:01,127 INFO [RS:0;jenkins-hbase4:35223] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:15:01,130 INFO [RS:2;jenkins-hbase4:43961] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:15:01,130 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 10:15:01,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:01,130 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 10:15:01,131 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,131 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,131 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,131 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,131 INFO [RS:2;jenkins-hbase4:43961] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:15:01,131 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 10:15:01,131 INFO [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:15:01,132 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/table 2023-07-18 10:15:01,132 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 10:15:01,133 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:01,131 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:01,135 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,135 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,135 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,135 DEBUG [RS:1;jenkins-hbase4:38763] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,139 INFO [RS:0;jenkins-hbase4:35223] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:15:01,141 INFO [RS:0;jenkins-hbase4:35223] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:15:01,141 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,141 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,141 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,141 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 10:15:01,141 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,141 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:15:01,141 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,141 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740 2023-07-18 10:15:01,141 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,142 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,142 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,142 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,142 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,142 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:01,142 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740 2023-07-18 10:15:01,142 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,142 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 10:15:01,143 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:2;jenkins-hbase4:43961] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,143 DEBUG [RS:0;jenkins-hbase4:35223] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:01,145 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,145 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,145 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,146 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,147 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 10:15:01,149 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 10:15:01,150 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,151 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,151 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,151 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,154 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:01,155 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10904118880, jitterRate=0.015525206923484802}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 10:15:01,155 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 10:15:01,155 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 10:15:01,155 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 10:15:01,155 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 10:15:01,155 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 10:15:01,156 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 10:15:01,156 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 10:15:01,156 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 10:15:01,157 INFO [RS:1;jenkins-hbase4:38763] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:15:01,157 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38763,1689675300467-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,158 INFO [RS:2;jenkins-hbase4:43961] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:15:01,158 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43961,1689675300665-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 10:15:01,159 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 10:15:01,159 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 10:15:01,159 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 10:15:01,163 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 10:15:01,164 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 10:15:01,168 INFO [RS:0;jenkins-hbase4:35223] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:15:01,168 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35223,1689675300284-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,169 INFO [RS:1;jenkins-hbase4:38763] regionserver.Replication(203): jenkins-hbase4.apache.org,38763,1689675300467 started 2023-07-18 10:15:01,169 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38763,1689675300467, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38763, sessionid=0x10177ed7f730002 2023-07-18 10:15:01,169 DEBUG [RS:1;jenkins-hbase4:38763] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:15:01,169 DEBUG [RS:1;jenkins-hbase4:38763] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,169 DEBUG [RS:1;jenkins-hbase4:38763] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38763,1689675300467' 2023-07-18 10:15:01,169 DEBUG [RS:1;jenkins-hbase4:38763] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:15:01,170 DEBUG [RS:1;jenkins-hbase4:38763] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:15:01,170 DEBUG [RS:1;jenkins-hbase4:38763] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:15:01,170 DEBUG [RS:1;jenkins-hbase4:38763] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:15:01,170 DEBUG [RS:1;jenkins-hbase4:38763] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,170 DEBUG [RS:1;jenkins-hbase4:38763] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38763,1689675300467' 2023-07-18 10:15:01,170 DEBUG [RS:1;jenkins-hbase4:38763] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:15:01,171 DEBUG [RS:1;jenkins-hbase4:38763] procedure.ZKProcedureMemberRpcs(154): 
Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:15:01,171 DEBUG [RS:1;jenkins-hbase4:38763] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:15:01,171 INFO [RS:1;jenkins-hbase4:38763] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 10:15:01,173 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,173 DEBUG [RS:1;jenkins-hbase4:38763] zookeeper.ZKUtil(398): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 10:15:01,174 INFO [RS:1;jenkins-hbase4:38763] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 10:15:01,174 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,174 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,177 INFO [RS:2;jenkins-hbase4:43961] regionserver.Replication(203): jenkins-hbase4.apache.org,43961,1689675300665 started 2023-07-18 10:15:01,177 INFO [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43961,1689675300665, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43961, sessionid=0x10177ed7f730003 2023-07-18 10:15:01,177 DEBUG [RS:2;jenkins-hbase4:43961] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:15:01,177 DEBUG [RS:2;jenkins-hbase4:43961] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:01,177 DEBUG [RS:2;jenkins-hbase4:43961] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43961,1689675300665' 2023-07-18 10:15:01,177 DEBUG [RS:2;jenkins-hbase4:43961] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:15:01,177 DEBUG [RS:2;jenkins-hbase4:43961] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:15:01,178 DEBUG [RS:2;jenkins-hbase4:43961] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:15:01,178 DEBUG [RS:2;jenkins-hbase4:43961] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:15:01,178 DEBUG [RS:2;jenkins-hbase4:43961] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:01,178 DEBUG [RS:2;jenkins-hbase4:43961] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43961,1689675300665' 2023-07-18 10:15:01,178 DEBUG [RS:2;jenkins-hbase4:43961] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:15:01,178 DEBUG [RS:2;jenkins-hbase4:43961] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:15:01,178 DEBUG [RS:2;jenkins-hbase4:43961] 
procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:15:01,178 INFO [RS:2;jenkins-hbase4:43961] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 10:15:01,178 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,179 DEBUG [RS:2;jenkins-hbase4:43961] zookeeper.ZKUtil(398): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 10:15:01,179 INFO [RS:2;jenkins-hbase4:43961] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 10:15:01,179 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,179 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,181 INFO [RS:0;jenkins-hbase4:35223] regionserver.Replication(203): jenkins-hbase4.apache.org,35223,1689675300284 started 2023-07-18 10:15:01,182 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35223,1689675300284, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35223, sessionid=0x10177ed7f730001 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35223,1689675300284' 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35223,1689675300284' 2023-07-18 10:15:01,182 DEBUG [RS:0;jenkins-hbase4:35223] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:15:01,183 DEBUG [RS:0;jenkins-hbase4:35223] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:15:01,183 DEBUG [RS:0;jenkins-hbase4:35223] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:15:01,183 INFO [RS:0;jenkins-hbase4:35223] 
quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 10:15:01,183 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,183 DEBUG [RS:0;jenkins-hbase4:35223] zookeeper.ZKUtil(398): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 10:15:01,183 INFO [RS:0;jenkins-hbase4:35223] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 10:15:01,183 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,183 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,277 INFO [RS:1;jenkins-hbase4:38763] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38763%2C1689675300467, suffix=, logDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,38763,1689675300467, archiveDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/oldWALs, maxLogs=32 2023-07-18 10:15:01,281 INFO [RS:2;jenkins-hbase4:43961] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43961%2C1689675300665, suffix=, logDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,43961,1689675300665, archiveDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/oldWALs, maxLogs=32 2023-07-18 10:15:01,285 INFO [RS:0;jenkins-hbase4:35223] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35223%2C1689675300284, suffix=, logDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,35223,1689675300284, archiveDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/oldWALs, maxLogs=32 2023-07-18 10:15:01,297 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK] 2023-07-18 10:15:01,297 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK] 2023-07-18 10:15:01,297 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK] 2023-07-18 10:15:01,307 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK] 2023-07-18 10:15:01,307 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK] 2023-07-18 10:15:01,307 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK] 2023-07-18 10:15:01,307 INFO [RS:1;jenkins-hbase4:38763] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,38763,1689675300467/jenkins-hbase4.apache.org%2C38763%2C1689675300467.1689675301278 2023-07-18 10:15:01,308 DEBUG [RS:1;jenkins-hbase4:38763] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK], DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK]] 2023-07-18 10:15:01,315 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK] 2023-07-18 10:15:01,315 DEBUG [jenkins-hbase4:42475] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 10:15:01,315 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK] 2023-07-18 10:15:01,315 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK] 2023-07-18 10:15:01,315 DEBUG [jenkins-hbase4:42475] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:01,315 DEBUG [jenkins-hbase4:42475] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:01,315 DEBUG [jenkins-hbase4:42475] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:01,315 DEBUG [jenkins-hbase4:42475] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:01,315 DEBUG [jenkins-hbase4:42475] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:01,316 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35223,1689675300284, state=OPENING 2023-07-18 10:15:01,317 INFO [RS:2;jenkins-hbase4:43961] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,43961,1689675300665/jenkins-hbase4.apache.org%2C43961%2C1689675300665.1689675301281 2023-07-18 10:15:01,317 DEBUG [RS:2;jenkins-hbase4:43961] 
wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK], DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK]] 2023-07-18 10:15:01,317 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 10:15:01,319 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:01,319 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 10:15:01,323 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35223,1689675300284}] 2023-07-18 10:15:01,324 INFO [RS:0;jenkins-hbase4:35223] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,35223,1689675300284/jenkins-hbase4.apache.org%2C35223%2C1689675300284.1689675301286 2023-07-18 10:15:01,324 DEBUG [RS:0;jenkins-hbase4:35223] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK], DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK], DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK]] 2023-07-18 10:15:01,477 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,478 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:15:01,480 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41884, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:15:01,489 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 10:15:01,489 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:01,491 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35223%2C1689675300284.meta, suffix=.meta, logDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,35223,1689675300284, archiveDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/oldWALs, maxLogs=32 2023-07-18 10:15:01,523 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK] 2023-07-18 10:15:01,524 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK] 2023-07-18 10:15:01,530 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK] 2023-07-18 10:15:01,546 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/WALs/jenkins-hbase4.apache.org,35223,1689675300284/jenkins-hbase4.apache.org%2C35223%2C1689675300284.meta.1689675301492.meta 2023-07-18 10:15:01,546 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39159,DS-a5d8ea2a-b4e4-4a9b-84c4-83d53d553801,DISK], DatanodeInfoWithStorage[127.0.0.1:34075,DS-7e436a9b-6fa1-42f4-a69d-de298812063d,DISK], DatanodeInfoWithStorage[127.0.0.1:40959,DS-e245721d-4073-4544-a10d-bdc7da090b29,DISK]] 2023-07-18 10:15:01,546 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:01,546 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 10:15:01,547 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 10:15:01,547 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 10:15:01,547 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 10:15:01,547 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:01,547 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 10:15:01,547 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 10:15:01,549 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 10:15:01,550 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/info 2023-07-18 10:15:01,550 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/info 2023-07-18 10:15:01,551 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 10:15:01,551 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:01,552 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 10:15:01,553 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:15:01,553 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:15:01,553 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 10:15:01,555 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:01,555 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 10:15:01,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/table 2023-07-18 10:15:01,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/table 2023-07-18 10:15:01,557 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 10:15:01,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:01,560 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740 2023-07-18 10:15:01,561 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740 2023-07-18 10:15:01,564 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 10:15:01,566 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 10:15:01,567 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10826024480, jitterRate=0.0082520991563797}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 10:15:01,567 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 10:15:01,568 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689675301477 2023-07-18 10:15:01,573 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 10:15:01,574 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 10:15:01,574 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35223,1689675300284, state=OPEN 2023-07-18 10:15:01,576 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 10:15:01,576 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 10:15:01,578 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 10:15:01,578 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35223,1689675300284 in 257 msec 2023-07-18 10:15:01,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 10:15:01,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 419 msec 2023-07-18 10:15:01,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 578 msec 2023-07-18 10:15:01,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689675301581, completionTime=-1 2023-07-18 10:15:01,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 10:15:01,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-18 10:15:01,585 DEBUG [hconnection-0x2f691227-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:01,586 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41886, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:01,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 10:15:01,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689675361588 2023-07-18 10:15:01,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689675421588 2023-07-18 10:15:01,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-18 10:15:01,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42475,1689675300038-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42475,1689675300038-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42475,1689675300038-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42475, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:01,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 10:15:01,595 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:01,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 10:15:01,597 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 10:15:01,598 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:01,598 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:15:01,600 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,600 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8 empty. 2023-07-18 10:15:01,601 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,601 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 10:15:01,616 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42475,1689675300038] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:01,618 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:01,618 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42475,1689675300038] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 10:15:01,620 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:01,623 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:15:01,628 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 65b69f3702ea55493c6c9cf2fbc8fdf8, NAME => 'hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp 2023-07-18 10:15:01,639 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,649 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e empty. 2023-07-18 10:15:01,650 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,650 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 10:15:01,688 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:01,690 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 243736aed6193fb6285dacb3df8cae8e, NAME => 'hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp 2023-07-18 10:15:01,695 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:01,695 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 65b69f3702ea55493c6c9cf2fbc8fdf8, disabling compactions & flushes 2023-07-18 10:15:01,695 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 2023-07-18 10:15:01,695 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 
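Editor's note: the CREATE_TABLE_WRITE_FS_LAYOUT records above print the full descriptor for hbase:namespace ('info' family with BLOOMFILTER=ROW, IN_MEMORY=true, VERSIONS=10, BLOCKSIZE=8192). That system table is created internally by the master; the sketch below only shows how the same family attributes would be expressed through the public builder API, against a hypothetical user table name.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceLikeSchema {
  // Mirrors the 'info' family attributes logged for hbase:namespace.
  static void createDemoTable(Admin admin) throws IOException {
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setInMemory(true)                   // IN_MEMORY => 'true'
        .setMaxVersions(10)                  // VERSIONS => '10'
        .setBlocksize(8192)                  // BLOCKSIZE => '8192'
        .build();
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_ns_like"))   // hypothetical table name
        .setColumnFamily(info)
        .build();
    admin.createTable(td);
  }
}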
2023-07-18 10:15:01,695 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. after waiting 0 ms 2023-07-18 10:15:01,695 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 2023-07-18 10:15:01,695 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 2023-07-18 10:15:01,695 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 65b69f3702ea55493c6c9cf2fbc8fdf8: 2023-07-18 10:15:01,697 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:15:01,699 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675301699"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675301699"}]},"ts":"1689675301699"} 2023-07-18 10:15:01,702 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 10:15:01,703 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:15:01,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675301703"}]},"ts":"1689675301703"} 2023-07-18 10:15:01,705 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:01,705 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 243736aed6193fb6285dacb3df8cae8e, disabling compactions & flushes 2023-07-18 10:15:01,705 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:01,705 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:01,705 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. after waiting 0 ms 2023-07-18 10:15:01,705 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:01,705 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 
2023-07-18 10:15:01,705 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 243736aed6193fb6285dacb3df8cae8e: 2023-07-18 10:15:01,705 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 10:15:01,707 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:15:01,709 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:01,709 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675301709"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675301709"}]},"ts":"1689675301709"} 2023-07-18 10:15:01,709 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:01,709 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:01,709 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:01,709 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:01,709 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=65b69f3702ea55493c6c9cf2fbc8fdf8, ASSIGN}] 2023-07-18 10:15:01,710 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=65b69f3702ea55493c6c9cf2fbc8fdf8, ASSIGN 2023-07-18 10:15:01,710 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 10:15:01,712 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:15:01,712 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=65b69f3702ea55493c6c9cf2fbc8fdf8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35223,1689675300284; forceNewPlan=false, retain=false 2023-07-18 10:15:01,712 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675301712"}]},"ts":"1689675301712"} 2023-07-18 10:15:01,713 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 10:15:01,718 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:01,718 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:01,718 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:01,719 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:01,719 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:01,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=243736aed6193fb6285dacb3df8cae8e, ASSIGN}] 2023-07-18 10:15:01,721 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=243736aed6193fb6285dacb3df8cae8e, ASSIGN 2023-07-18 10:15:01,722 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=243736aed6193fb6285dacb3df8cae8e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38763,1689675300467; forceNewPlan=false, retain=false 2023-07-18 10:15:01,722 INFO [jenkins-hbase4:42475] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
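Editor's note: the assignment records above show TransitRegionStateProcedures choosing servers for the two system regions and the balancer reporting "Reassigned 2 regions." A short sketch, assuming an open Connection, of how a client could inspect where a table's regions actually landed; the method and variable names are illustrative.

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class WhereAreMyRegions {
  static void printLocations(Connection conn, String table) throws IOException {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf(table))) {
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        // In this log, hbase:namespace lands on jenkins-hbase4.apache.org,35223,...
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}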
2023-07-18 10:15:01,724 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=65b69f3702ea55493c6c9cf2fbc8fdf8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,724 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675301724"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675301724"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675301724"}]},"ts":"1689675301724"} 2023-07-18 10:15:01,724 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=243736aed6193fb6285dacb3df8cae8e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,725 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675301724"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675301724"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675301724"}]},"ts":"1689675301724"} 2023-07-18 10:15:01,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 65b69f3702ea55493c6c9cf2fbc8fdf8, server=jenkins-hbase4.apache.org,35223,1689675300284}] 2023-07-18 10:15:01,727 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 243736aed6193fb6285dacb3df8cae8e, server=jenkins-hbase4.apache.org,38763,1689675300467}] 2023-07-18 10:15:01,879 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,879 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:15:01,881 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42346, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:15:01,883 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 
2023-07-18 10:15:01,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65b69f3702ea55493c6c9cf2fbc8fdf8, NAME => 'hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:01,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:01,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,885 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:01,885 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 243736aed6193fb6285dacb3df8cae8e, NAME => 'hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:01,885 INFO [StoreOpener-65b69f3702ea55493c6c9cf2fbc8fdf8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,885 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 10:15:01,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. service=MultiRowMutationService 2023-07-18 10:15:01,886 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
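Editor's note: the open of hbase:rsgroup above loads org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from the table descriptor, and the earlier descriptor dump pins the table to DisabledRegionSplitPolicy. The sketch below shows one way to declare those same two attributes with the public builder API; the table name is hypothetical and this is not how the master builds the system table internally.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CoprocessorAndSplitPolicy {
  static TableDescriptor build() throws java.io.IOException {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))  // hypothetical
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
        // Same endpoint class the log shows being loaded for hbase:rsgroup.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // Matches SPLIT_POLICY => DisabledRegionSplitPolicy in the logged descriptor.
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }
}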
2023-07-18 10:15:01,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:01,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,887 DEBUG [StoreOpener-65b69f3702ea55493c6c9cf2fbc8fdf8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8/info 2023-07-18 10:15:01,887 DEBUG [StoreOpener-65b69f3702ea55493c6c9cf2fbc8fdf8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8/info 2023-07-18 10:15:01,887 INFO [StoreOpener-65b69f3702ea55493c6c9cf2fbc8fdf8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65b69f3702ea55493c6c9cf2fbc8fdf8 columnFamilyName info 2023-07-18 10:15:01,888 INFO [StoreOpener-243736aed6193fb6285dacb3df8cae8e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,888 INFO [StoreOpener-65b69f3702ea55493c6c9cf2fbc8fdf8-1] regionserver.HStore(310): Store=65b69f3702ea55493c6c9cf2fbc8fdf8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:01,889 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,889 DEBUG [StoreOpener-243736aed6193fb6285dacb3df8cae8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e/m 2023-07-18 10:15:01,889 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,889 DEBUG [StoreOpener-243736aed6193fb6285dacb3df8cae8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e/m 2023-07-18 10:15:01,890 INFO [StoreOpener-243736aed6193fb6285dacb3df8cae8e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 243736aed6193fb6285dacb3df8cae8e columnFamilyName m 2023-07-18 10:15:01,890 INFO [StoreOpener-243736aed6193fb6285dacb3df8cae8e-1] regionserver.HStore(310): Store=243736aed6193fb6285dacb3df8cae8e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:01,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,892 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,893 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:01,896 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:01,897 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 65b69f3702ea55493c6c9cf2fbc8fdf8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9799287360, jitterRate=-0.08737024664878845}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:15:01,897 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:01,897 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 65b69f3702ea55493c6c9cf2fbc8fdf8: 2023-07-18 10:15:01,898 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8., pid=8, masterSystemTime=1689675301879 2023-07-18 10:15:01,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:01,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 2023-07-18 10:15:01,901 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 2023-07-18 10:15:01,901 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 243736aed6193fb6285dacb3df8cae8e; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@67918d5f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:15:01,901 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=65b69f3702ea55493c6c9cf2fbc8fdf8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:01,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 243736aed6193fb6285dacb3df8cae8e: 2023-07-18 10:15:01,901 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675301901"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675301901"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675301901"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675301901"}]},"ts":"1689675301901"} 2023-07-18 10:15:01,902 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e., pid=9, masterSystemTime=1689675301879 2023-07-18 10:15:01,905 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:01,906 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 
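Editor's note: at this point both system regions are open (next sequenceid=2) but the tables are only marked ENABLED in the records that follow. A minimal sketch, assuming an Admin handle, of how test code could block until a table is actually usable; the 100 ms polling interval is arbitrary.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class WaitForTable {
  static void waitUntilAvailable(Admin admin, String table) throws IOException, InterruptedException {
    TableName name = TableName.valueOf(table);
    // Poll until the master reports every region of the table as assigned and open.
    while (!admin.isTableAvailable(name)) {
      Thread.sleep(100);
    }
  }
}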
2023-07-18 10:15:01,906 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=243736aed6193fb6285dacb3df8cae8e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:01,906 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675301906"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675301906"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675301906"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675301906"}]},"ts":"1689675301906"} 2023-07-18 10:15:01,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 10:15:01,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 65b69f3702ea55493c6c9cf2fbc8fdf8, server=jenkins-hbase4.apache.org,35223,1689675300284 in 178 msec 2023-07-18 10:15:01,908 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-18 10:15:01,908 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=65b69f3702ea55493c6c9cf2fbc8fdf8, ASSIGN in 197 msec 2023-07-18 10:15:01,909 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 10:15:01,909 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:15:01,909 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 243736aed6193fb6285dacb3df8cae8e, server=jenkins-hbase4.apache.org,38763,1689675300467 in 180 msec 2023-07-18 10:15:01,909 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675301909"}]},"ts":"1689675301909"} 2023-07-18 10:15:01,910 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 10:15:01,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 10:15:01,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=243736aed6193fb6285dacb3df8cae8e, ASSIGN in 190 msec 2023-07-18 10:15:01,911 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:15:01,911 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675301911"}]},"ts":"1689675301911"} 2023-07-18 10:15:01,912 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 10:15:01,913 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:15:01,914 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 318 msec 2023-07-18 10:15:01,915 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:15:01,916 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 298 msec 2023-07-18 10:15:01,922 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42475,1689675300038] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:01,924 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42354, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:01,926 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 10:15:01,927 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 10:15:01,931 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:01,932 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:01,933 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 10:15:01,934 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42475,1689675300038] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 10:15:01,997 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 10:15:01,999 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:01,999 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:02,003 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 10:15:02,012 DEBUG [Listener at 
localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:02,015 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-18 10:15:02,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 10:15:02,032 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:02,035 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-18 10:15:02,050 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 10:15:02,052 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 10:15:02,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.214sec 2023-07-18 10:15:02,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-18 10:15:02,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:02,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-18 10:15:02,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-18 10:15:02,055 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:02,056 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:15:02,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
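Editor's note: the records above show CreateNamespaceProcedure finishing for 'default' and 'hbase', the master declaring initialization complete, and the hbase:quota table being scheduled. A short sketch, assuming an Admin handle, of listing the namespaces a client would see at this point.

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public class ListNamespaces {
  static void print(Admin admin) throws IOException {
    // Right after master init this prints the two built-in namespaces: 'default' and 'hbase'.
    for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
      System.out.println(ns.getName());
    }
  }
}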
2023-07-18 10:15:02,057 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,058 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9 empty. 2023-07-18 10:15:02,058 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,059 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-18 10:15:02,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-18 10:15:02,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-18 10:15:02,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:02,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:02,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 10:15:02,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 10:15:02,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42475,1689675300038-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 10:15:02,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42475,1689675300038-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-18 10:15:02,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 10:15:02,071 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:02,072 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7e4c68191973fb7b87f848fac1cd0bd9, NAME => 'hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp 2023-07-18 10:15:02,081 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:02,081 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 7e4c68191973fb7b87f848fac1cd0bd9, disabling compactions & flushes 2023-07-18 10:15:02,081 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:02,081 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:02,081 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. after waiting 0 ms 2023-07-18 10:15:02,081 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:02,081 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:02,081 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 7e4c68191973fb7b87f848fac1cd0bd9: 2023-07-18 10:15:02,083 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:15:02,084 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689675302084"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675302084"}]},"ts":"1689675302084"} 2023-07-18 10:15:02,085 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
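Editor's note: the descriptor dump above shows hbase:quota with its 'q' (quota definitions) and 'u' (usage) families. Quotas are normally set through the client quota API rather than by writing that table directly; the sketch below is an illustrative example of one such call and is not something this test performs.

import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.quotas.QuotaSettings;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class ThrottleExample {
  static void throttle(Admin admin) throws IOException {
    // Hypothetical throttle: cap the 'default' namespace at 100 requests/sec; persisted in hbase:quota.
    QuotaSettings settings = QuotaSettingsFactory.throttleNamespace(
        "default", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS);
    admin.setQuota(settings);
  }
}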
2023-07-18 10:15:02,086 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:15:02,086 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675302086"}]},"ts":"1689675302086"} 2023-07-18 10:15:02,087 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-18 10:15:02,091 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:02,091 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:02,091 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:02,091 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:02,091 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:02,091 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=7e4c68191973fb7b87f848fac1cd0bd9, ASSIGN}] 2023-07-18 10:15:02,092 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=7e4c68191973fb7b87f848fac1cd0bd9, ASSIGN 2023-07-18 10:15:02,093 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=7e4c68191973fb7b87f848fac1cd0bd9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38763,1689675300467; forceNewPlan=false, retain=false 2023-07-18 10:15:02,127 DEBUG [Listener at localhost/40599] zookeeper.ReadOnlyZKClient(139): Connect 0x2aea8ae8 to 127.0.0.1:59011 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:02,134 DEBUG [Listener at localhost/40599] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@121437bd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:02,135 DEBUG [hconnection-0x64896c5f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:02,137 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41892, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:02,139 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:02,139 INFO [Listener at localhost/40599] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:02,141 DEBUG [Listener at localhost/40599] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 10:15:02,143 INFO [RS-EventLoopGroup-8-2] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58588, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 10:15:02,146 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 10:15:02,147 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:02,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 10:15:02,148 DEBUG [Listener at localhost/40599] zookeeper.ReadOnlyZKClient(139): Connect 0x55a3eb8d to 127.0.0.1:59011 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:02,153 DEBUG [Listener at localhost/40599] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a678ef2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:02,153 INFO [Listener at localhost/40599] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59011 2023-07-18 10:15:02,156 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:02,159 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10177ed7f73000a connected 2023-07-18 10:15:02,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-18 10:15:02,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-18 10:15:02,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 10:15:02,173 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:02,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 15 msec 2023-07-18 10:15:02,243 INFO [jenkins-hbase4:42475] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
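Editor's note: the RPC records above show the client first disabling the balancer (set balanceSwitch=false) and then creating namespace 'np1' with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2. A sketch of the equivalent client calls, assuming an Admin handle.

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public class CreateNp1 {
  static void run(Admin admin) throws IOException {
    // Turn the balancer off synchronously, as the logged balanceSwitch=false request does.
    admin.balancerSwitch(false, true);
    // Namespace with the same quota properties the master logs for 'np1'.
    NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
        .addConfiguration("hbase.namespace.quota.maxregions", "5")
        .addConfiguration("hbase.namespace.quota.maxtables", "2")
        .build();
    admin.createNamespace(np1);
  }
}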
2023-07-18 10:15:02,245 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7e4c68191973fb7b87f848fac1cd0bd9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:02,245 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689675302244"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675302244"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675302244"}]},"ts":"1689675302244"} 2023-07-18 10:15:02,246 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure 7e4c68191973fb7b87f848fac1cd0bd9, server=jenkins-hbase4.apache.org,38763,1689675300467}] 2023-07-18 10:15:02,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 10:15:02,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:02,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-18 10:15:02,275 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:02,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-18 10:15:02,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 10:15:02,277 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:02,278 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 10:15:02,279 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:15:02,281 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,282 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38 empty. 
2023-07-18 10:15:02,282 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,282 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 10:15:02,295 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:02,296 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3609dc8bb9f875c1cbe5880471519a38, NAME => 'np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp 2023-07-18 10:15:02,307 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:02,307 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 3609dc8bb9f875c1cbe5880471519a38, disabling compactions & flushes 2023-07-18 10:15:02,307 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:02,307 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:02,307 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. after waiting 0 ms 2023-07-18 10:15:02,307 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:02,307 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:02,307 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 3609dc8bb9f875c1cbe5880471519a38: 2023-07-18 10:15:02,309 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:15:02,310 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675302310"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675302310"}]},"ts":"1689675302310"} 2023-07-18 10:15:02,312 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
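Editor's note: the records above show the master accepting create 'np1:table1' with a single 'fam1' family and adding its region to hbase:meta. A sketch of the matching client call, assuming an Admin handle; with hbase.namespace.quota.maxtables=2 on 'np1', only two such tables would be admitted.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateNp1Table1 {
  static void run(Admin admin) throws IOException {
    admin.createTable(
        TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build());
  }
}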
2023-07-18 10:15:02,313 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:15:02,313 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675302313"}]},"ts":"1689675302313"} 2023-07-18 10:15:02,314 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-18 10:15:02,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:02,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:02,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:02,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:02,322 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:02,322 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=3609dc8bb9f875c1cbe5880471519a38, ASSIGN}] 2023-07-18 10:15:02,323 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=3609dc8bb9f875c1cbe5880471519a38, ASSIGN 2023-07-18 10:15:02,324 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=3609dc8bb9f875c1cbe5880471519a38, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38763,1689675300467; forceNewPlan=false, retain=false 2023-07-18 10:15:02,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 10:15:02,402 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 
2023-07-18 10:15:02,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7e4c68191973fb7b87f848fac1cd0bd9, NAME => 'hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:02,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:02,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,404 INFO [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,405 DEBUG [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9/q 2023-07-18 10:15:02,405 DEBUG [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9/q 2023-07-18 10:15:02,405 INFO [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7e4c68191973fb7b87f848fac1cd0bd9 columnFamilyName q 2023-07-18 10:15:02,406 INFO [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] regionserver.HStore(310): Store=7e4c68191973fb7b87f848fac1cd0bd9/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:02,406 INFO [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,407 DEBUG 
[StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9/u 2023-07-18 10:15:02,407 DEBUG [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9/u 2023-07-18 10:15:02,407 INFO [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7e4c68191973fb7b87f848fac1cd0bd9 columnFamilyName u 2023-07-18 10:15:02,408 INFO [StoreOpener-7e4c68191973fb7b87f848fac1cd0bd9-1] regionserver.HStore(310): Store=7e4c68191973fb7b87f848fac1cd0bd9/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:02,408 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,409 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,410 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-18 10:15:02,411 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:02,413 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:02,413 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7e4c68191973fb7b87f848fac1cd0bd9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11515059680, jitterRate=0.07242350280284882}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-18 10:15:02,413 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7e4c68191973fb7b87f848fac1cd0bd9: 2023-07-18 10:15:02,414 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9., pid=15, masterSystemTime=1689675302398 2023-07-18 10:15:02,415 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:02,415 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:02,416 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7e4c68191973fb7b87f848fac1cd0bd9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:02,416 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689675302416"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675302416"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675302416"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675302416"}]},"ts":"1689675302416"} 2023-07-18 10:15:02,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-18 10:15:02,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure 7e4c68191973fb7b87f848fac1cd0bd9, server=jenkins-hbase4.apache.org,38763,1689675300467 in 171 msec 2023-07-18 10:15:02,420 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 10:15:02,420 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=7e4c68191973fb7b87f848fac1cd0bd9, ASSIGN in 327 msec 2023-07-18 10:15:02,421 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:15:02,421 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675302421"}]},"ts":"1689675302421"} 2023-07-18 10:15:02,422 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-18 10:15:02,425 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:15:02,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 372 msec 2023-07-18 10:15:02,474 INFO [jenkins-hbase4:42475] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 10:15:02,475 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=3609dc8bb9f875c1cbe5880471519a38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:02,475 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675302475"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675302475"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675302475"}]},"ts":"1689675302475"} 2023-07-18 10:15:02,477 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 3609dc8bb9f875c1cbe5880471519a38, server=jenkins-hbase4.apache.org,38763,1689675300467}] 2023-07-18 10:15:02,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 10:15:02,632 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 
2023-07-18 10:15:02,632 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3609dc8bb9f875c1cbe5880471519a38, NAME => 'np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:02,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:02,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,634 INFO [StoreOpener-3609dc8bb9f875c1cbe5880471519a38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,636 DEBUG [StoreOpener-3609dc8bb9f875c1cbe5880471519a38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38/fam1 2023-07-18 10:15:02,636 DEBUG [StoreOpener-3609dc8bb9f875c1cbe5880471519a38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38/fam1 2023-07-18 10:15:02,637 INFO [StoreOpener-3609dc8bb9f875c1cbe5880471519a38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3609dc8bb9f875c1cbe5880471519a38 columnFamilyName fam1 2023-07-18 10:15:02,637 INFO [StoreOpener-3609dc8bb9f875c1cbe5880471519a38-1] regionserver.HStore(310): Store=3609dc8bb9f875c1cbe5880471519a38/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:02,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:02,643 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:02,644 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3609dc8bb9f875c1cbe5880471519a38; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9687395360, jitterRate=-0.09779100120067596}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:15:02,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3609dc8bb9f875c1cbe5880471519a38: 2023-07-18 10:15:02,644 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38., pid=18, masterSystemTime=1689675302628 2023-07-18 10:15:02,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:02,646 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:02,647 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=3609dc8bb9f875c1cbe5880471519a38, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:02,647 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675302647"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675302647"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675302647"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675302647"}]},"ts":"1689675302647"} 2023-07-18 10:15:02,651 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 10:15:02,651 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 3609dc8bb9f875c1cbe5880471519a38, server=jenkins-hbase4.apache.org,38763,1689675300467 in 172 msec 2023-07-18 10:15:02,653 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 10:15:02,653 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=3609dc8bb9f875c1cbe5880471519a38, ASSIGN in 329 msec 2023-07-18 10:15:02,653 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:15:02,653 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675302653"}]},"ts":"1689675302653"} 2023-07-18 10:15:02,665 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-18 10:15:02,669 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:15:02,671 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 397 msec 2023-07-18 10:15:02,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 10:15:02,880 INFO [Listener at localhost/40599] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-18 10:15:02,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:02,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-18 10:15:02,884 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:02,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-18 10:15:02,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 10:15:02,903 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=21 msec 2023-07-18 10:15:02,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 10:15:02,990 INFO [Listener at localhost/40599] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
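
The rollback of pid=19 above is a namespace region quota rejection: np1 permits at most 5 regions, and creating np1:table2 would raise the namespace total to 6. A minimal sketch of how such a cap is typically declared follows, using the standard namespace quota property hbase.namespace.quota.maxregions; the surrounding setup (class name, connection handling) is an assumption, not taken from this test.

    // Sketch (assumption, not the test's code): cap the np1 namespace at 5 regions total.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceRegionQuotaSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
              .addConfiguration("hbase.namespace.quota.maxregions", "5")
              .build();
          admin.createNamespace(np1);
          // Any later createTable whose regions would push np1 past 5 is rejected with
          // QuotaExceededException, which is what the ROLLEDBACK pid=19 entry records.
        }
      }
    }
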
2023-07-18 10:15:02,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:02,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:02,993 INFO [Listener at localhost/40599] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-18 10:15:02,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-18 10:15:02,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-18 10:15:02,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 10:15:03,000 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675302999"}]},"ts":"1689675302999"} 2023-07-18 10:15:03,001 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-18 10:15:03,003 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-18 10:15:03,003 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=3609dc8bb9f875c1cbe5880471519a38, UNASSIGN}] 2023-07-18 10:15:03,004 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=3609dc8bb9f875c1cbe5880471519a38, UNASSIGN 2023-07-18 10:15:03,005 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=3609dc8bb9f875c1cbe5880471519a38, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:03,005 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675303005"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675303005"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675303005"}]},"ts":"1689675303005"} 2023-07-18 10:15:03,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 3609dc8bb9f875c1cbe5880471519a38, server=jenkins-hbase4.apache.org,38763,1689675300467}] 2023-07-18 10:15:03,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 10:15:03,159 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:03,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3609dc8bb9f875c1cbe5880471519a38, disabling compactions & flushes 2023-07-18 10:15:03,160 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:03,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:03,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. after waiting 0 ms 2023-07-18 10:15:03,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:03,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:15:03,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38. 2023-07-18 10:15:03,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3609dc8bb9f875c1cbe5880471519a38: 2023-07-18 10:15:03,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:03,167 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=3609dc8bb9f875c1cbe5880471519a38, regionState=CLOSED 2023-07-18 10:15:03,168 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675303167"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675303167"}]},"ts":"1689675303167"} 2023-07-18 10:15:03,170 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-18 10:15:03,170 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 3609dc8bb9f875c1cbe5880471519a38, server=jenkins-hbase4.apache.org,38763,1689675300467 in 163 msec 2023-07-18 10:15:03,171 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-18 10:15:03,171 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=3609dc8bb9f875c1cbe5880471519a38, UNASSIGN in 167 msec 2023-07-18 10:15:03,172 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675303172"}]},"ts":"1689675303172"} 2023-07-18 10:15:03,173 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-18 10:15:03,174 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-18 10:15:03,176 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 181 msec 2023-07-18 10:15:03,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 10:15:03,300 INFO [Listener at localhost/40599] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-18 10:15:03,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-18 10:15:03,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-18 10:15:03,303 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 10:15:03,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-18 10:15:03,303 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 10:15:03,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:03,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 10:15:03,307 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:03,308 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38/fam1, FileablePath, hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38/recovered.edits] 2023-07-18 10:15:03,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 10:15:03,314 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38/recovered.edits/4.seqid to hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/archive/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38/recovered.edits/4.seqid 2023-07-18 10:15:03,315 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/.tmp/data/np1/table1/3609dc8bb9f875c1cbe5880471519a38 2023-07-18 10:15:03,315 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 10:15:03,318 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 10:15:03,319 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-18 10:15:03,321 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-18 10:15:03,322 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 10:15:03,322 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-18 10:15:03,322 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675303322"}]},"ts":"9223372036854775807"} 2023-07-18 10:15:03,324 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 10:15:03,324 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3609dc8bb9f875c1cbe5880471519a38, NAME => 'np1:table1,,1689675302271.3609dc8bb9f875c1cbe5880471519a38.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 10:15:03,324 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-18 10:15:03,324 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689675303324"}]},"ts":"9223372036854775807"} 2023-07-18 10:15:03,325 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-18 10:15:03,329 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 10:15:03,330 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 29 msec 2023-07-18 10:15:03,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 10:15:03,411 INFO [Listener at localhost/40599] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-18 10:15:03,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-18 10:15:03,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-18 10:15:03,424 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 10:15:03,427 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 10:15:03,429 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 10:15:03,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 10:15:03,430 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-18 10:15:03,430 DEBUG [Listener at 
localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:03,431 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 10:15:03,433 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 10:15:03,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-18 10:15:03,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42475] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 10:15:03,531 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 10:15:03,531 INFO [Listener at localhost/40599] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 10:15:03,531 DEBUG [Listener at localhost/40599] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2aea8ae8 to 127.0.0.1:59011 2023-07-18 10:15:03,531 DEBUG [Listener at localhost/40599] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:03,531 DEBUG [Listener at localhost/40599] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 10:15:03,531 DEBUG [Listener at localhost/40599] util.JVMClusterUtil(257): Found active master hash=10690290, stopped=false 2023-07-18 10:15:03,531 DEBUG [Listener at localhost/40599] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 10:15:03,532 DEBUG [Listener at localhost/40599] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 10:15:03,532 DEBUG [Listener at localhost/40599] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-18 10:15:03,532 INFO [Listener at localhost/40599] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:03,534 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:03,535 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:03,535 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:03,535 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:03,535 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:38763-0x10177ed7f730002, 
quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:03,535 INFO [Listener at localhost/40599] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 10:15:03,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:03,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:03,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:03,537 DEBUG [Listener at localhost/40599] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x57d6ad96 to 127.0.0.1:59011 2023-07-18 10:15:03,537 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:03,537 DEBUG [Listener at localhost/40599] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:03,537 INFO [Listener at localhost/40599] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35223,1689675300284' ***** 2023-07-18 10:15:03,537 INFO [Listener at localhost/40599] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:15:03,537 INFO [Listener at localhost/40599] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38763,1689675300467' ***** 2023-07-18 10:15:03,537 INFO [Listener at localhost/40599] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:15:03,537 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:03,537 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:03,537 INFO [Listener at localhost/40599] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43961,1689675300665' ***** 2023-07-18 10:15:03,538 INFO [Listener at localhost/40599] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:15:03,543 INFO [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:03,543 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:15:03,545 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:03,547 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:15:03,549 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:15:03,549 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:03,551 INFO [RS:0;jenkins-hbase4:35223] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@8711218{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:03,551 INFO 
[RS:0;jenkins-hbase4:35223] server.AbstractConnector(383): Stopped ServerConnector@a8ccbde{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:03,551 INFO [RS:0;jenkins-hbase4:35223] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:03,557 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:03,559 INFO [RS:2;jenkins-hbase4:43961] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@309982d1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:03,559 INFO [RS:1;jenkins-hbase4:38763] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@51a3754a{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:03,560 INFO [RS:0;jenkins-hbase4:35223] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39385d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:03,560 INFO [RS:2;jenkins-hbase4:43961] server.AbstractConnector(383): Stopped ServerConnector@244c8211{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:03,560 INFO [RS:2;jenkins-hbase4:43961] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:03,560 INFO [RS:1;jenkins-hbase4:38763] server.AbstractConnector(383): Stopped ServerConnector@19e45c9f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:03,560 INFO [RS:1;jenkins-hbase4:38763] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:03,565 INFO [RS:0;jenkins-hbase4:35223] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2a4ef6d8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:03,566 INFO [RS:2;jenkins-hbase4:43961] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@445d8bcc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:03,566 INFO [RS:1;jenkins-hbase4:38763] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6c0f5bac{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:03,566 INFO [RS:2;jenkins-hbase4:43961] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@24a41ef0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:03,566 INFO [RS:1;jenkins-hbase4:38763] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ac126e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:03,567 INFO [RS:1;jenkins-hbase4:38763] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:15:03,567 INFO 
[RS:1;jenkins-hbase4:38763] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:15:03,567 INFO [RS:1;jenkins-hbase4:38763] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 10:15:03,568 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(3305): Received CLOSE for 7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:03,568 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(3305): Received CLOSE for 243736aed6193fb6285dacb3df8cae8e 2023-07-18 10:15:03,568 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:03,568 DEBUG [RS:1;jenkins-hbase4:38763] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6adf5391 to 127.0.0.1:59011 2023-07-18 10:15:03,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7e4c68191973fb7b87f848fac1cd0bd9, disabling compactions & flushes 2023-07-18 10:15:03,568 DEBUG [RS:1;jenkins-hbase4:38763] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:03,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:03,569 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 10:15:03,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:03,569 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1478): Online Regions={7e4c68191973fb7b87f848fac1cd0bd9=hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9., 243736aed6193fb6285dacb3df8cae8e=hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e.} 2023-07-18 10:15:03,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. after waiting 0 ms 2023-07-18 10:15:03,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:03,569 DEBUG [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1504): Waiting on 243736aed6193fb6285dacb3df8cae8e, 7e4c68191973fb7b87f848fac1cd0bd9 2023-07-18 10:15:03,570 INFO [RS:2;jenkins-hbase4:43961] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:15:03,570 INFO [RS:2;jenkins-hbase4:43961] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:15:03,571 INFO [RS:2;jenkins-hbase4:43961] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
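
The shutdown cascade running through these entries ("Shutting down minicluster", the STOPPING region server notices, and the region close/flush activity that continues below) is what HBaseTestingUtility#shutdownMiniCluster produces. A minimal sketch of a JUnit teardown that would drive it follows; the field and method names (TEST_UTIL, tearDownAfterClass) are illustrative assumptions, not this test's source.

    // Sketch (assumption, not the test's code): teardown that stops the minicluster.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    public class MiniClusterTeardownSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Stops the master and the region servers, then DFS and ZooKeeper, producing the
        // shutdown log cascade seen in this section.
        TEST_UTIL.shutdownMiniCluster();
      }
    }
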
2023-07-18 10:15:03,571 INFO [RS:0;jenkins-hbase4:35223] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:15:03,571 INFO [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:03,571 DEBUG [RS:2;jenkins-hbase4:43961] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x28c7ef89 to 127.0.0.1:59011 2023-07-18 10:15:03,571 DEBUG [RS:2;jenkins-hbase4:43961] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:03,571 INFO [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43961,1689675300665; all regions closed. 2023-07-18 10:15:03,571 DEBUG [RS:2;jenkins-hbase4:43961] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 10:15:03,571 INFO [RS:0;jenkins-hbase4:35223] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:15:03,571 INFO [RS:0;jenkins-hbase4:35223] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 10:15:03,571 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(3305): Received CLOSE for 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:03,572 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:03,572 DEBUG [RS:0;jenkins-hbase4:35223] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x108c95bd to 127.0.0.1:59011 2023-07-18 10:15:03,572 DEBUG [RS:0;jenkins-hbase4:35223] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:03,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 65b69f3702ea55493c6c9cf2fbc8fdf8, disabling compactions & flushes 2023-07-18 10:15:03,573 INFO [RS:0;jenkins-hbase4:35223] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:15:03,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 2023-07-18 10:15:03,573 INFO [RS:0;jenkins-hbase4:35223] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:15:03,573 INFO [RS:0;jenkins-hbase4:35223] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:15:03,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 2023-07-18 10:15:03,573 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 10:15:03,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. after waiting 0 ms 2023-07-18 10:15:03,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 
2023-07-18 10:15:03,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 65b69f3702ea55493c6c9cf2fbc8fdf8 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-18 10:15:03,575 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 10:15:03,575 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 65b69f3702ea55493c6c9cf2fbc8fdf8=hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8.} 2023-07-18 10:15:03,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 10:15:03,577 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1504): Waiting on 1588230740, 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:03,577 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 10:15:03,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 10:15:03,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 10:15:03,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 10:15:03,577 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-18 10:15:03,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/quota/7e4c68191973fb7b87f848fac1cd0bd9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:15:03,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:03,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7e4c68191973fb7b87f848fac1cd0bd9: 2023-07-18 10:15:03,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689675302052.7e4c68191973fb7b87f848fac1cd0bd9. 2023-07-18 10:15:03,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 243736aed6193fb6285dacb3df8cae8e, disabling compactions & flushes 2023-07-18 10:15:03,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:03,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:03,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. after waiting 0 ms 2023-07-18 10:15:03,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 
2023-07-18 10:15:03,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 243736aed6193fb6285dacb3df8cae8e 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-18 10:15:03,595 DEBUG [RS:2;jenkins-hbase4:43961] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/oldWALs 2023-07-18 10:15:03,595 INFO [RS:2;jenkins-hbase4:43961] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43961%2C1689675300665:(num 1689675301281) 2023-07-18 10:15:03,595 DEBUG [RS:2;jenkins-hbase4:43961] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:03,595 INFO [RS:2;jenkins-hbase4:43961] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:03,596 INFO [RS:2;jenkins-hbase4:43961] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 10:15:03,596 INFO [RS:2;jenkins-hbase4:43961] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:15:03,596 INFO [RS:2;jenkins-hbase4:43961] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:15:03,596 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:15:03,596 INFO [RS:2;jenkins-hbase4:43961] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:15:03,597 INFO [RS:2;jenkins-hbase4:43961] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43961 2023-07-18 10:15:03,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e/.tmp/m/3e21c11b4c90498f92bb8e54fe617495 2023-07-18 10:15:03,612 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/.tmp/info/df71574d60a1428aa8735abceca3059e 2023-07-18 10:15:03,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e/.tmp/m/3e21c11b4c90498f92bb8e54fe617495 as hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e/m/3e21c11b4c90498f92bb8e54fe617495 2023-07-18 10:15:03,619 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for df71574d60a1428aa8735abceca3059e 2023-07-18 10:15:03,626 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e/m/3e21c11b4c90498f92bb8e54fe617495, entries=1, sequenceid=7, filesize=4.9 K 2023-07-18 10:15:03,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of 
dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 243736aed6193fb6285dacb3df8cae8e in 47ms, sequenceid=7, compaction requested=false 2023-07-18 10:15:03,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 10:15:03,637 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/.tmp/rep_barrier/037264d0bd6f4bcd9ba1e72c91871257 2023-07-18 10:15:03,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/rsgroup/243736aed6193fb6285dacb3df8cae8e/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-18 10:15:03,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:15:03,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:03,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 243736aed6193fb6285dacb3df8cae8e: 2023-07-18 10:15:03,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689675301616.243736aed6193fb6285dacb3df8cae8e. 2023-07-18 10:15:03,645 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 037264d0bd6f4bcd9ba1e72c91871257 2023-07-18 10:15:03,662 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/.tmp/table/123e9815cbd649d29df495987233f49d 2023-07-18 10:15:03,669 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 123e9815cbd649d29df495987233f49d 2023-07-18 10:15:03,670 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/.tmp/info/df71574d60a1428aa8735abceca3059e as hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/info/df71574d60a1428aa8735abceca3059e 2023-07-18 10:15:03,676 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:03,676 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:03,676 DEBUG [Listener at localhost/40599-EventThread] 
zookeeper.ZKWatcher(600): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:03,676 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:03,676 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:03,676 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43961,1689675300665 2023-07-18 10:15:03,676 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:03,677 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43961,1689675300665] 2023-07-18 10:15:03,677 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43961,1689675300665; numProcessing=1 2023-07-18 10:15:03,682 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for df71574d60a1428aa8735abceca3059e 2023-07-18 10:15:03,682 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/info/df71574d60a1428aa8735abceca3059e, entries=32, sequenceid=31, filesize=8.5 K 2023-07-18 10:15:03,683 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/.tmp/rep_barrier/037264d0bd6f4bcd9ba1e72c91871257 as hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/rep_barrier/037264d0bd6f4bcd9ba1e72c91871257 2023-07-18 10:15:03,694 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 037264d0bd6f4bcd9ba1e72c91871257 2023-07-18 10:15:03,694 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/rep_barrier/037264d0bd6f4bcd9ba1e72c91871257, entries=1, sequenceid=31, filesize=4.9 K 2023-07-18 10:15:03,695 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/.tmp/table/123e9815cbd649d29df495987233f49d as 
hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/table/123e9815cbd649d29df495987233f49d 2023-07-18 10:15:03,701 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 123e9815cbd649d29df495987233f49d 2023-07-18 10:15:03,701 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/table/123e9815cbd649d29df495987233f49d, entries=8, sequenceid=31, filesize=5.2 K 2023-07-18 10:15:03,702 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 125ms, sequenceid=31, compaction requested=false 2023-07-18 10:15:03,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 10:15:03,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-18 10:15:03,720 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:15:03,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 10:15:03,721 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 10:15:03,721 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 10:15:03,769 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38763,1689675300467; all regions closed. 2023-07-18 10:15:03,769 DEBUG [RS:1;jenkins-hbase4:38763] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 10:15:03,775 DEBUG [RS:1;jenkins-hbase4:38763] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/oldWALs 2023-07-18 10:15:03,776 INFO [RS:1;jenkins-hbase4:38763] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38763%2C1689675300467:(num 1689675301278) 2023-07-18 10:15:03,776 DEBUG [RS:1;jenkins-hbase4:38763] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:03,776 INFO [RS:1;jenkins-hbase4:38763] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:03,776 INFO [RS:1;jenkins-hbase4:38763] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 10:15:03,776 INFO [RS:1;jenkins-hbase4:38763] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:15:03,776 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:15:03,776 INFO [RS:1;jenkins-hbase4:38763] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-18 10:15:03,776 INFO [RS:1;jenkins-hbase4:38763] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:15:03,777 INFO [RS:1;jenkins-hbase4:38763] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38763 2023-07-18 10:15:03,778 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1504): Waiting on 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:03,781 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43961,1689675300665 already deleted, retry=false 2023-07-18 10:15:03,781 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43961,1689675300665 expired; onlineServers=2 2023-07-18 10:15:03,781 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:03,781 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38763,1689675300467 2023-07-18 10:15:03,781 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:03,783 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38763,1689675300467] 2023-07-18 10:15:03,783 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38763,1689675300467; numProcessing=2 2023-07-18 10:15:03,785 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38763,1689675300467 already deleted, retry=false 2023-07-18 10:15:03,785 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38763,1689675300467 expired; onlineServers=1 2023-07-18 10:15:03,834 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:03,834 INFO [RS:2;jenkins-hbase4:43961] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43961,1689675300665; zookeeper connection closed. 
2023-07-18 10:15:03,834 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:43961-0x10177ed7f730003, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:03,836 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@fbe300d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@fbe300d 2023-07-18 10:15:03,978 DEBUG [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1504): Waiting on 65b69f3702ea55493c6c9cf2fbc8fdf8 2023-07-18 10:15:04,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8/.tmp/info/6070829499104be6952d4d5fb62bb7b7 2023-07-18 10:15:04,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6070829499104be6952d4d5fb62bb7b7 2023-07-18 10:15:04,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8/.tmp/info/6070829499104be6952d4d5fb62bb7b7 as hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8/info/6070829499104be6952d4d5fb62bb7b7 2023-07-18 10:15:04,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6070829499104be6952d4d5fb62bb7b7 2023-07-18 10:15:04,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8/info/6070829499104be6952d4d5fb62bb7b7, entries=3, sequenceid=8, filesize=5.0 K 2023-07-18 10:15:04,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 65b69f3702ea55493c6c9cf2fbc8fdf8 in 447ms, sequenceid=8, compaction requested=false 2023-07-18 10:15:04,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 10:15:04,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/data/hbase/namespace/65b69f3702ea55493c6c9cf2fbc8fdf8/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-18 10:15:04,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 2023-07-18 10:15:04,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 65b69f3702ea55493c6c9cf2fbc8fdf8: 2023-07-18 10:15:04,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689675301595.65b69f3702ea55493c6c9cf2fbc8fdf8. 
2023-07-18 10:15:04,035 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:04,035 INFO [RS:1;jenkins-hbase4:38763] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38763,1689675300467; zookeeper connection closed. 2023-07-18 10:15:04,035 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:38763-0x10177ed7f730002, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:04,037 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@372d4672] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@372d4672 2023-07-18 10:15:04,155 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 10:15:04,155 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 10:15:04,178 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35223,1689675300284; all regions closed. 2023-07-18 10:15:04,178 DEBUG [RS:0;jenkins-hbase4:35223] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 10:15:04,184 DEBUG [RS:0;jenkins-hbase4:35223] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/oldWALs 2023-07-18 10:15:04,184 INFO [RS:0;jenkins-hbase4:35223] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35223%2C1689675300284.meta:.meta(num 1689675301492) 2023-07-18 10:15:04,190 DEBUG [RS:0;jenkins-hbase4:35223] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/oldWALs 2023-07-18 10:15:04,190 INFO [RS:0;jenkins-hbase4:35223] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35223%2C1689675300284:(num 1689675301286) 2023-07-18 10:15:04,190 DEBUG [RS:0;jenkins-hbase4:35223] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:04,190 INFO [RS:0;jenkins-hbase4:35223] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:04,190 INFO [RS:0;jenkins-hbase4:35223] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 10:15:04,190 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 10:15:04,191 INFO [RS:0;jenkins-hbase4:35223] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35223 2023-07-18 10:15:04,196 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35223,1689675300284 2023-07-18 10:15:04,196 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:04,198 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35223,1689675300284] 2023-07-18 10:15:04,198 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35223,1689675300284; numProcessing=3 2023-07-18 10:15:04,199 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35223,1689675300284 already deleted, retry=false 2023-07-18 10:15:04,199 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35223,1689675300284 expired; onlineServers=0 2023-07-18 10:15:04,199 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42475,1689675300038' ***** 2023-07-18 10:15:04,199 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 10:15:04,200 DEBUG [M:0;jenkins-hbase4:42475] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49c8a3e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:04,200 INFO [M:0;jenkins-hbase4:42475] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:04,202 INFO [M:0;jenkins-hbase4:42475] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@220815cc{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 10:15:04,202 INFO [M:0;jenkins-hbase4:42475] server.AbstractConnector(383): Stopped ServerConnector@44e94428{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:04,202 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:04,202 INFO [M:0;jenkins-hbase4:42475] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:04,202 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:04,203 INFO [M:0;jenkins-hbase4:42475] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@611aa7c8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:04,203 INFO [M:0;jenkins-hbase4:42475] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4da0099e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:04,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:04,203 INFO [M:0;jenkins-hbase4:42475] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42475,1689675300038 2023-07-18 10:15:04,203 INFO [M:0;jenkins-hbase4:42475] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42475,1689675300038; all regions closed. 2023-07-18 10:15:04,203 DEBUG [M:0;jenkins-hbase4:42475] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:04,203 INFO [M:0;jenkins-hbase4:42475] master.HMaster(1491): Stopping master jetty server 2023-07-18 10:15:04,204 INFO [M:0;jenkins-hbase4:42475] server.AbstractConnector(383): Stopped ServerConnector@28fd2f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:04,204 DEBUG [M:0;jenkins-hbase4:42475] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 10:15:04,204 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 10:15:04,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675301021] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675301021,5,FailOnTimeoutGroup] 2023-07-18 10:15:04,204 DEBUG [M:0;jenkins-hbase4:42475] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 10:15:04,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675301021] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675301021,5,FailOnTimeoutGroup] 2023-07-18 10:15:04,205 INFO [M:0;jenkins-hbase4:42475] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 10:15:04,206 INFO [M:0;jenkins-hbase4:42475] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-18 10:15:04,206 INFO [M:0;jenkins-hbase4:42475] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 10:15:04,206 DEBUG [M:0;jenkins-hbase4:42475] master.HMaster(1512): Stopping service threads 2023-07-18 10:15:04,206 INFO [M:0;jenkins-hbase4:42475] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 10:15:04,207 ERROR [M:0;jenkins-hbase4:42475] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 10:15:04,207 INFO [M:0;jenkins-hbase4:42475] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 10:15:04,207 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 10:15:04,208 DEBUG [M:0;jenkins-hbase4:42475] zookeeper.ZKUtil(398): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 10:15:04,208 WARN [M:0;jenkins-hbase4:42475] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 10:15:04,208 INFO [M:0;jenkins-hbase4:42475] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 10:15:04,209 INFO [M:0;jenkins-hbase4:42475] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 10:15:04,209 DEBUG [M:0;jenkins-hbase4:42475] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 10:15:04,209 INFO [M:0;jenkins-hbase4:42475] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:04,209 DEBUG [M:0;jenkins-hbase4:42475] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:04,209 DEBUG [M:0;jenkins-hbase4:42475] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 10:15:04,209 DEBUG [M:0;jenkins-hbase4:42475] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 10:15:04,209 INFO [M:0;jenkins-hbase4:42475] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.10 KB 2023-07-18 10:15:04,226 INFO [M:0;jenkins-hbase4:42475] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b2a8d3b83c014b23848a1d5bc33f8668 2023-07-18 10:15:04,231 DEBUG [M:0;jenkins-hbase4:42475] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b2a8d3b83c014b23848a1d5bc33f8668 as hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b2a8d3b83c014b23848a1d5bc33f8668 2023-07-18 10:15:04,236 INFO [M:0;jenkins-hbase4:42475] regionserver.HStore(1080): Added hdfs://localhost:43981/user/jenkins/test-data/b5d51aab-36c8-0569-9035-6ddc998c8bc6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b2a8d3b83c014b23848a1d5bc33f8668, entries=24, sequenceid=194, filesize=12.4 K 2023-07-18 10:15:04,237 INFO [M:0;jenkins-hbase4:42475] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95179, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=194, compaction requested=false 2023-07-18 10:15:04,239 INFO [M:0;jenkins-hbase4:42475] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:04,239 DEBUG [M:0;jenkins-hbase4:42475] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:15:04,243 INFO [M:0;jenkins-hbase4:42475] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 10:15:04,243 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:15:04,244 INFO [M:0;jenkins-hbase4:42475] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42475 2023-07-18 10:15:04,246 DEBUG [M:0;jenkins-hbase4:42475] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42475,1689675300038 already deleted, retry=false 2023-07-18 10:15:04,298 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:04,298 INFO [RS:0;jenkins-hbase4:35223] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35223,1689675300284; zookeeper connection closed. 
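At this point the region servers and the master have flushed their state and are closing their ZooKeeper sessions; the DataNode and MiniZooKeeperCluster teardown that follows completes the shutdown. In a test, this whole sequence is normally driven by a single utility call; a minimal sketch, assuming a HBaseTestingUtility handle named util:

    // Minimal sketch, assuming `util` refers to the running mini cluster.
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class ShutdownSketch {
      public static void tearDown(HBaseTestingUtility util) throws Exception {
        // Stops masters and region servers, then the mini DFS and mini ZooKeeper
        // clusters, which is the order the surrounding log entries show.
        util.shutdownMiniCluster();
      }
    }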
2023-07-18 10:15:04,298 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): regionserver:35223-0x10177ed7f730001, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:04,299 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5d827b24] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5d827b24 2023-07-18 10:15:04,299 INFO [Listener at localhost/40599] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-18 10:15:04,398 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:04,398 INFO [M:0;jenkins-hbase4:42475] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42475,1689675300038; zookeeper connection closed. 2023-07-18 10:15:04,398 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): master:42475-0x10177ed7f730000, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:04,399 WARN [Listener at localhost/40599] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 10:15:04,403 INFO [Listener at localhost/40599] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 10:15:04,510 WARN [BP-146259941-172.31.14.131-1689675298830 heartbeating to localhost/127.0.0.1:43981] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 10:15:04,511 WARN [BP-146259941-172.31.14.131-1689675298830 heartbeating to localhost/127.0.0.1:43981] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-146259941-172.31.14.131-1689675298830 (Datanode Uuid 44ec7168-64c5-412f-bfaf-71023fcf60af) service to localhost/127.0.0.1:43981 2023-07-18 10:15:04,511 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/dfs/data/data5/current/BP-146259941-172.31.14.131-1689675298830] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:15:04,512 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/dfs/data/data6/current/BP-146259941-172.31.14.131-1689675298830] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:15:04,514 WARN [Listener at localhost/40599] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 10:15:04,525 INFO [Listener at localhost/40599] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 10:15:04,630 WARN [BP-146259941-172.31.14.131-1689675298830 heartbeating to localhost/127.0.0.1:43981] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 10:15:04,630 WARN [BP-146259941-172.31.14.131-1689675298830 heartbeating to localhost/127.0.0.1:43981] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-146259941-172.31.14.131-1689675298830 
(Datanode Uuid 25ad4980-a3f4-44a3-b19a-c63dae902ce5) service to localhost/127.0.0.1:43981 2023-07-18 10:15:04,631 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/dfs/data/data3/current/BP-146259941-172.31.14.131-1689675298830] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:15:04,632 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/dfs/data/data4/current/BP-146259941-172.31.14.131-1689675298830] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:15:04,633 WARN [Listener at localhost/40599] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 10:15:04,640 INFO [Listener at localhost/40599] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 10:15:04,744 WARN [BP-146259941-172.31.14.131-1689675298830 heartbeating to localhost/127.0.0.1:43981] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 10:15:04,745 WARN [BP-146259941-172.31.14.131-1689675298830 heartbeating to localhost/127.0.0.1:43981] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-146259941-172.31.14.131-1689675298830 (Datanode Uuid 36b2f150-5656-4ef1-b3e5-1693ce8dc9f2) service to localhost/127.0.0.1:43981 2023-07-18 10:15:04,745 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/dfs/data/data1/current/BP-146259941-172.31.14.131-1689675298830] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:15:04,746 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/cluster_048d916c-1efd-119c-1721-9d1603941625/dfs/data/data2/current/BP-146259941-172.31.14.131-1689675298830] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 10:15:04,757 INFO [Listener at localhost/40599] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 10:15:04,880 INFO [Listener at localhost/40599] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.log.dir so I do NOT create it in 
target/test-data/d19a173b-073f-b888-bb58-de35142bed71 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5de2595-9109-dc4f-b862-1d01f6c0330c/hadoop.tmp.dir so I do NOT create it in target/test-data/d19a173b-073f-b888-bb58-de35142bed71 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca, deleteOnExit=true 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/test.cache.data in system properties and HBase conf 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 10:15:04,920 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir in system properties and HBase conf 2023-07-18 10:15:04,921 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 10:15:04,921 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 10:15:04,921 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 10:15:04,921 DEBUG [Listener at localhost/40599] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 10:15:04,921 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 10:15:04,921 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 10:15:04,922 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 10:15:04,922 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 10:15:04,922 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 10:15:04,922 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 10:15:04,922 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 10:15:04,922 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 10:15:04,922 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 10:15:04,922 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/nfs.dump.dir in system properties and HBase conf 2023-07-18 10:15:04,923 INFO [Listener at localhost/40599] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir in system properties and HBase conf 2023-07-18 10:15:04,923 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 10:15:04,923 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 10:15:04,923 INFO [Listener at localhost/40599] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 10:15:04,928 WARN [Listener at localhost/40599] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 10:15:04,928 WARN [Listener at localhost/40599] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 10:15:04,970 WARN [Listener at localhost/40599] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:15:04,973 INFO [Listener at localhost/40599] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:15:04,978 INFO [Listener at localhost/40599] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/Jetty_localhost_41615_hdfs____.7g5cp/webapp 2023-07-18 10:15:04,979 DEBUG [Listener at localhost/40599-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10177ed7f73000a, quorum=127.0.0.1:59011, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 10:15:04,979 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10177ed7f73000a, quorum=127.0.0.1:59011, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 10:15:05,074 INFO [Listener at localhost/40599] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41615 2023-07-18 10:15:05,079 WARN [Listener at localhost/40599] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 10:15:05,079 WARN [Listener at localhost/40599] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 10:15:05,173 WARN [Listener at localhost/39145] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:15:05,192 WARN [Listener at localhost/39145] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:15:05,194 WARN [Listener 
at localhost/39145] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:15:05,196 INFO [Listener at localhost/39145] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:15:05,201 INFO [Listener at localhost/39145] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/Jetty_localhost_34363_datanode____.wxw8dm/webapp 2023-07-18 10:15:05,295 INFO [Listener at localhost/39145] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34363 2023-07-18 10:15:05,302 WARN [Listener at localhost/34513] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:15:05,312 WARN [Listener at localhost/34513] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:15:05,314 WARN [Listener at localhost/34513] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:15:05,315 INFO [Listener at localhost/34513] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:15:05,319 INFO [Listener at localhost/34513] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/Jetty_localhost_37539_datanode____.cfrw9u/webapp 2023-07-18 10:15:05,411 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x178cd7825ebc71b8: Processing first storage report for DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0 from datanode b836b7b3-727c-4af5-8105-0d364ba55840 2023-07-18 10:15:05,412 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x178cd7825ebc71b8: from storage DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0 node DatanodeRegistration(127.0.0.1:40947, datanodeUuid=b836b7b3-727c-4af5-8105-0d364ba55840, infoPort=42467, infoSecurePort=0, ipcPort=34513, storageInfo=lv=-57;cid=testClusterID;nsid=73300940;c=1689675304930), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 10:15:05,412 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x178cd7825ebc71b8: Processing first storage report for DS-e9726415-590f-4dcc-8ae9-7ccbe7932f37 from datanode b836b7b3-727c-4af5-8105-0d364ba55840 2023-07-18 10:15:05,412 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x178cd7825ebc71b8: from storage DS-e9726415-590f-4dcc-8ae9-7ccbe7932f37 node DatanodeRegistration(127.0.0.1:40947, datanodeUuid=b836b7b3-727c-4af5-8105-0d364ba55840, infoPort=42467, infoSecurePort=0, ipcPort=34513, storageInfo=lv=-57;cid=testClusterID;nsid=73300940;c=1689675304930), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:15:05,427 INFO [Listener at localhost/34513] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37539 2023-07-18 10:15:05,437 WARN [Listener at localhost/35685] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-18 10:15:05,460 WARN [Listener at localhost/35685] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 10:15:05,463 WARN [Listener at localhost/35685] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 10:15:05,464 INFO [Listener at localhost/35685] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 10:15:05,468 INFO [Listener at localhost/35685] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/Jetty_localhost_44501_datanode____s56ijh/webapp 2023-07-18 10:15:05,565 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa3aa2a636573be9f: Processing first storage report for DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea from datanode 889dd26b-f065-474d-9af0-febdf961a555 2023-07-18 10:15:05,565 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa3aa2a636573be9f: from storage DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea node DatanodeRegistration(127.0.0.1:37329, datanodeUuid=889dd26b-f065-474d-9af0-febdf961a555, infoPort=40875, infoSecurePort=0, ipcPort=35685, storageInfo=lv=-57;cid=testClusterID;nsid=73300940;c=1689675304930), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 10:15:05,566 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa3aa2a636573be9f: Processing first storage report for DS-12a3bd92-d694-4371-8ec6-d24a9e508613 from datanode 889dd26b-f065-474d-9af0-febdf961a555 2023-07-18 10:15:05,566 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa3aa2a636573be9f: from storage DS-12a3bd92-d694-4371-8ec6-d24a9e508613 node DatanodeRegistration(127.0.0.1:37329, datanodeUuid=889dd26b-f065-474d-9af0-febdf961a555, infoPort=40875, infoSecurePort=0, ipcPort=35685, storageInfo=lv=-57;cid=testClusterID;nsid=73300940;c=1689675304930), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:15:05,579 INFO [Listener at localhost/35685] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44501 2023-07-18 10:15:05,614 WARN [Listener at localhost/44679] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 10:15:05,729 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6cbfab9e129b6270: Processing first storage report for DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1 from datanode 5ef0d31e-9a72-4f2c-9b55-6a38121c0be8 2023-07-18 10:15:05,729 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6cbfab9e129b6270: from storage DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1 node DatanodeRegistration(127.0.0.1:36533, datanodeUuid=5ef0d31e-9a72-4f2c-9b55-6a38121c0be8, infoPort=39921, infoSecurePort=0, ipcPort=44679, storageInfo=lv=-57;cid=testClusterID;nsid=73300940;c=1689675304930), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:15:05,729 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6cbfab9e129b6270: Processing first storage report 
for DS-ff8a9119-b522-48c2-86ba-c94e2d1854cf from datanode 5ef0d31e-9a72-4f2c-9b55-6a38121c0be8 2023-07-18 10:15:05,729 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6cbfab9e129b6270: from storage DS-ff8a9119-b522-48c2-86ba-c94e2d1854cf node DatanodeRegistration(127.0.0.1:36533, datanodeUuid=5ef0d31e-9a72-4f2c-9b55-6a38121c0be8, infoPort=39921, infoSecurePort=0, ipcPort=44679, storageInfo=lv=-57;cid=testClusterID;nsid=73300940;c=1689675304930), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 10:15:05,738 DEBUG [Listener at localhost/44679] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71 2023-07-18 10:15:05,747 INFO [Listener at localhost/44679] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/zookeeper_0, clientPort=56417, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 10:15:05,748 INFO [Listener at localhost/44679] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56417 2023-07-18 10:15:05,748 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:05,749 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:05,765 INFO [Listener at localhost/44679] util.FSUtils(471): Created version file at hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792 with version=8 2023-07-18 10:15:05,765 INFO [Listener at localhost/44679] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:38869/user/jenkins/test-data/e735d4e4-4fa2-abe1-f0cd-27b59b169796/hbase-staging 2023-07-18 10:15:05,766 DEBUG [Listener at localhost/44679] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 10:15:05,766 DEBUG [Listener at localhost/44679] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 10:15:05,766 DEBUG [Listener at localhost/44679] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 10:15:05,766 DEBUG [Listener at localhost/44679] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
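The second minicluster startup, announced earlier at 10:15:04,920 as StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}, is now bringing up MiniDFS, the MiniZooKeeperCluster, and the master. A hedged sketch of the startup call that produces those options follows; the option values mirror the log, while the class and method names around them are assumptions about the harness.

    // Sketch of a startup matching the StartMiniClusterOption values logged above.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class StartupSketch {
      public static HBaseTestingUtility start() throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)        // one master, as logged
            .numRegionServers(3)  // three region servers
            .numDataNodes(3)      // three HDFS DataNodes
            .numZkServers(1)      // a single MiniZooKeeperCluster node
            .build();
        // Brings up MiniDFS and MiniZK, then the HBase master and region servers.
        util.startMiniCluster(option);
        return util;
      }
    }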
2023-07-18 10:15:05,767 INFO [Listener at localhost/44679] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:05,767 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:05,767 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:05,767 INFO [Listener at localhost/44679] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:05,767 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:05,767 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:05,767 INFO [Listener at localhost/44679] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:05,768 INFO [Listener at localhost/44679] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46153 2023-07-18 10:15:05,768 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:05,769 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:05,770 INFO [Listener at localhost/44679] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46153 connecting to ZooKeeper ensemble=127.0.0.1:56417 2023-07-18 10:15:05,779 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:461530x0, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:05,780 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46153-0x10177ed96110000 connected 2023-07-18 10:15:05,794 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:05,794 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:05,795 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:05,795 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46153 2023-07-18 10:15:05,795 DEBUG [Listener at localhost/44679] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46153 2023-07-18 10:15:05,795 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46153 2023-07-18 10:15:05,796 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46153 2023-07-18 10:15:05,796 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46153 2023-07-18 10:15:05,798 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:05,798 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:05,798 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:05,798 INFO [Listener at localhost/44679] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 10:15:05,798 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:05,798 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:05,799 INFO [Listener at localhost/44679] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 10:15:05,799 INFO [Listener at localhost/44679] http.HttpServer(1146): Jetty bound to port 40415 2023-07-18 10:15:05,799 INFO [Listener at localhost/44679] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:05,800 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:05,800 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d867386{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:05,801 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:05,801 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@936f509{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:05,932 INFO [Listener at localhost/44679] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:05,933 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:05,933 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:05,934 INFO [Listener at localhost/44679] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 10:15:05,935 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:05,936 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1d66a142{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/jetty-0_0_0_0-40415-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8128705834659733055/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 10:15:05,938 INFO [Listener at localhost/44679] server.AbstractConnector(333): Started ServerConnector@700c4bda{HTTP/1.1, (http/1.1)}{0.0.0.0:40415} 2023-07-18 10:15:05,938 INFO [Listener at localhost/44679] server.Server(415): Started @41650ms 2023-07-18 10:15:05,938 INFO [Listener at localhost/44679] master.HMaster(444): hbase.rootdir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792, hbase.cluster.distributed=false 2023-07-18 10:15:05,953 INFO [Listener at localhost/44679] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:05,953 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:05,953 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:05,954 INFO 
[Listener at localhost/44679] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:05,954 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:05,954 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:05,954 INFO [Listener at localhost/44679] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:05,954 INFO [Listener at localhost/44679] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37027 2023-07-18 10:15:05,955 INFO [Listener at localhost/44679] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:15:05,957 DEBUG [Listener at localhost/44679] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:15:05,957 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:05,958 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:05,959 INFO [Listener at localhost/44679] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37027 connecting to ZooKeeper ensemble=127.0.0.1:56417 2023-07-18 10:15:05,966 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:370270x0, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:05,967 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:370270x0, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:05,968 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37027-0x10177ed96110001 connected 2023-07-18 10:15:05,969 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:05,969 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:05,971 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37027 2023-07-18 10:15:05,971 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37027 2023-07-18 10:15:05,975 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37027 2023-07-18 10:15:05,975 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37027 2023-07-18 10:15:05,978 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37027 2023-07-18 10:15:05,980 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:05,981 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:05,981 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:05,981 INFO [Listener at localhost/44679] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:15:05,981 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:05,981 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:05,981 INFO [Listener at localhost/44679] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 10:15:05,982 INFO [Listener at localhost/44679] http.HttpServer(1146): Jetty bound to port 43395 2023-07-18 10:15:05,982 INFO [Listener at localhost/44679] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:05,983 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:05,983 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3a52efb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:05,984 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:05,984 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b1b3fe0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:06,096 INFO [Listener at localhost/44679] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:06,097 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:06,097 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:06,097 INFO [Listener at localhost/44679] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:15:06,098 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:06,099 INFO 
[Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@58e40637{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/jetty-0_0_0_0-43395-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2601212363271901883/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:06,100 INFO [Listener at localhost/44679] server.AbstractConnector(333): Started ServerConnector@22526232{HTTP/1.1, (http/1.1)}{0.0.0.0:43395} 2023-07-18 10:15:06,100 INFO [Listener at localhost/44679] server.Server(415): Started @41812ms 2023-07-18 10:15:06,111 INFO [Listener at localhost/44679] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:06,111 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:06,111 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:06,112 INFO [Listener at localhost/44679] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:06,112 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:06,112 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:06,112 INFO [Listener at localhost/44679] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:06,112 INFO [Listener at localhost/44679] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35165 2023-07-18 10:15:06,113 INFO [Listener at localhost/44679] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:15:06,114 DEBUG [Listener at localhost/44679] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:15:06,114 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:06,115 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:06,116 INFO [Listener at localhost/44679] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35165 connecting to ZooKeeper ensemble=127.0.0.1:56417 2023-07-18 10:15:06,119 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:351650x0, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
10:15:06,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35165-0x10177ed96110002 connected 2023-07-18 10:15:06,121 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:06,121 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:06,122 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:06,122 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35165 2023-07-18 10:15:06,122 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35165 2023-07-18 10:15:06,126 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35165 2023-07-18 10:15:06,126 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35165 2023-07-18 10:15:06,127 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35165 2023-07-18 10:15:06,129 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:06,129 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:06,129 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:06,130 INFO [Listener at localhost/44679] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:15:06,130 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:06,130 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:06,130 INFO [Listener at localhost/44679] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 10:15:06,130 INFO [Listener at localhost/44679] http.HttpServer(1146): Jetty bound to port 33241 2023-07-18 10:15:06,130 INFO [Listener at localhost/44679] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:06,133 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:06,133 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5645df0e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:06,133 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:06,134 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@522757e5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:06,264 INFO [Listener at localhost/44679] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:06,265 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:06,265 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:06,266 INFO [Listener at localhost/44679] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:15:06,267 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:06,268 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4bc8a2d7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/jetty-0_0_0_0-33241-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7759787999843204645/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:06,271 INFO [Listener at localhost/44679] server.AbstractConnector(333): Started ServerConnector@166c2be4{HTTP/1.1, (http/1.1)}{0.0.0.0:33241} 2023-07-18 10:15:06,271 INFO [Listener at localhost/44679] server.Server(415): Started @41982ms 2023-07-18 10:15:06,287 INFO [Listener at localhost/44679] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:06,288 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:06,288 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:06,288 INFO [Listener at localhost/44679] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:06,288 INFO 
[Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:06,288 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:06,288 INFO [Listener at localhost/44679] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:06,289 INFO [Listener at localhost/44679] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40717 2023-07-18 10:15:06,289 INFO [Listener at localhost/44679] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:15:06,290 DEBUG [Listener at localhost/44679] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:15:06,291 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:06,292 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:06,293 INFO [Listener at localhost/44679] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40717 connecting to ZooKeeper ensemble=127.0.0.1:56417 2023-07-18 10:15:06,296 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:407170x0, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:06,298 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:407170x0, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:06,298 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40717-0x10177ed96110003 connected 2023-07-18 10:15:06,298 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:06,299 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:06,299 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40717 2023-07-18 10:15:06,300 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40717 2023-07-18 10:15:06,300 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40717 2023-07-18 10:15:06,304 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40717 2023-07-18 10:15:06,304 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=40717 2023-07-18 10:15:06,306 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:06,306 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:06,306 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:06,307 INFO [Listener at localhost/44679] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:15:06,307 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:06,307 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:06,308 INFO [Listener at localhost/44679] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 10:15:06,308 INFO [Listener at localhost/44679] http.HttpServer(1146): Jetty bound to port 37757 2023-07-18 10:15:06,308 INFO [Listener at localhost/44679] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:06,314 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:06,315 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@36964252{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:06,315 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:06,315 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@505a3c39{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:06,434 INFO [Listener at localhost/44679] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:06,435 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:06,435 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:06,435 INFO [Listener at localhost/44679] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:15:06,436 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:06,437 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@4316d895{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/jetty-0_0_0_0-37757-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8299577949312524288/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:06,438 INFO [Listener at localhost/44679] server.AbstractConnector(333): Started ServerConnector@b0c4942{HTTP/1.1, (http/1.1)}{0.0.0.0:37757} 2023-07-18 10:15:06,439 INFO [Listener at localhost/44679] server.Server(415): Started @42150ms 2023-07-18 10:15:06,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:06,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@44575355{HTTP/1.1, (http/1.1)}{0.0.0.0:37431} 2023-07-18 10:15:06,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42165ms 2023-07-18 10:15:06,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:06,455 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 10:15:06,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:06,458 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:06,458 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:06,459 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:06,458 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:06,459 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:06,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 10:15:06,462 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 10:15:06,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46153,1689675305766 from backup master directory 2023-07-18 10:15:06,467 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:06,467 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:15:06,467 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 10:15:06,467 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:06,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/hbase.id with ID: b2e60d7b-9c25-49d3-bf8a-79a5bdfb4c40 2023-07-18 10:15:06,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:06,499 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:06,513 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0d9e62ef to 127.0.0.1:56417 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:06,517 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24bc6172, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:06,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:06,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 10:15:06,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:06,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store-tmp 2023-07-18 10:15:06,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:06,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 10:15:06,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:06,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:06,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 10:15:06,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:06,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 10:15:06,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:15:06,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/WALs/jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:06,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46153%2C1689675305766, suffix=, logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/WALs/jenkins-hbase4.apache.org,46153,1689675305766, archiveDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/oldWALs, maxLogs=10 2023-07-18 10:15:06,558 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK] 2023-07-18 10:15:06,563 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK] 2023-07-18 10:15:06,563 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK] 2023-07-18 10:15:06,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/WALs/jenkins-hbase4.apache.org,46153,1689675305766/jenkins-hbase4.apache.org%2C46153%2C1689675305766.1689675306537 2023-07-18 10:15:06,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK], DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK], DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK]] 2023-07-18 10:15:06,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:06,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:06,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:06,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:06,567 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:06,568 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 10:15:06,568 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 10:15:06,569 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:06,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:06,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:06,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 10:15:06,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:06,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11947415520, jitterRate=0.1126897782087326}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:15:06,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:15:06,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 10:15:06,577 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 10:15:06,577 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 10:15:06,577 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 10:15:06,578 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 10:15:06,578 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 10:15:06,578 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 10:15:06,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 10:15:06,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 10:15:06,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 10:15:06,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 10:15:06,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 10:15:06,583 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:06,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 10:15:06,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 10:15:06,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 10:15:06,587 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:06,587 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:06,587 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 10:15:06,587 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:06,587 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:06,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46153,1689675305766, sessionid=0x10177ed96110000, setting cluster-up flag (Was=false) 2023-07-18 10:15:06,594 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:06,599 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 10:15:06,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:06,604 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:06,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 10:15:06,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:06,611 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.hbase-snapshot/.tmp 2023-07-18 10:15:06,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 10:15:06,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 10:15:06,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 10:15:06,617 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:15:06,617 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-18 10:15:06,618 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 10:15:06,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 10:15:06,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 10:15:06,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 10:15:06,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 10:15:06,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:15:06,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:15:06,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:15:06,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 10:15:06,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 10:15:06,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:06,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689675336636 2023-07-18 10:15:06,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 10:15:06,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 10:15:06,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 10:15:06,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 10:15:06,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 10:15:06,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 10:15:06,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,639 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 10:15:06,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 10:15:06,640 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 10:15:06,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 10:15:06,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 10:15:06,642 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(951): ClusterId : b2e60d7b-9c25-49d3-bf8a-79a5bdfb4c40 2023-07-18 10:15:06,642 DEBUG [RS:0;jenkins-hbase4:37027] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:15:06,642 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(951): ClusterId : b2e60d7b-9c25-49d3-bf8a-79a5bdfb4c40 2023-07-18 10:15:06,642 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 
'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:06,642 DEBUG [RS:1;jenkins-hbase4:35165] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:15:06,645 DEBUG [RS:0;jenkins-hbase4:37027] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:15:06,645 DEBUG [RS:0;jenkins-hbase4:37027] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:15:06,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 10:15:06,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 10:15:06,648 DEBUG [RS:1;jenkins-hbase4:35165] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:15:06,649 DEBUG [RS:1;jenkins-hbase4:35165] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:15:06,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675306648,5,FailOnTimeoutGroup] 2023-07-18 10:15:06,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675306649,5,FailOnTimeoutGroup] 2023-07-18 10:15:06,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 10:15:06,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
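
The executor services and chores above are fixed-size pools (corePoolSize == maxPoolSize) plus periodically scheduled cleanup tasks (LogsCleaner/HFileCleaner every 600000 ms). A minimal stand-alone sketch of those two patterns in plain java.util.concurrent — illustrative only, not HBase's own ExecutorService or ChoreService classes:

  import java.util.concurrent.Executors;
  import java.util.concurrent.LinkedBlockingQueue;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.ThreadPoolExecutor;
  import java.util.concurrent.TimeUnit;

  public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
          // Fixed-size worker pool, analogous to corePoolSize=5, maxPoolSize=5 above.
          ThreadPoolExecutor masterOpenRegion = new ThreadPoolExecutor(
                  5, 5, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

          // Periodic "chore", analogous to ScheduledChore name=LogsCleaner, period=600000 ms.
          ScheduledExecutorService choreService = Executors.newSingleThreadScheduledExecutor();
          choreService.scheduleAtFixedRate(
                  () -> System.out.println("log cleaner chore fired"),
                  0, 600_000, TimeUnit.MILLISECONDS);

          masterOpenRegion.submit(() -> System.out.println("open-region task"));

          TimeUnit.SECONDS.sleep(1);
          masterOpenRegion.shutdown();
          choreService.shutdownNow();
      }
  }
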
2023-07-18 10:15:06,651 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(951): ClusterId : b2e60d7b-9c25-49d3-bf8a-79a5bdfb4c40 2023-07-18 10:15:06,651 DEBUG [RS:2;jenkins-hbase4:40717] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:15:06,651 DEBUG [RS:0;jenkins-hbase4:37027] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:15:06,653 DEBUG [RS:0;jenkins-hbase4:37027] zookeeper.ReadOnlyZKClient(139): Connect 0x1306cf76 to 127.0.0.1:56417 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:06,654 DEBUG [RS:2;jenkins-hbase4:40717] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:15:06,654 DEBUG [RS:1;jenkins-hbase4:35165] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:15:06,654 DEBUG [RS:2;jenkins-hbase4:40717] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:15:06,661 DEBUG [RS:1;jenkins-hbase4:35165] zookeeper.ReadOnlyZKClient(139): Connect 0x7266baae to 127.0.0.1:56417 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:06,667 DEBUG [RS:2;jenkins-hbase4:40717] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:15:06,669 DEBUG [RS:2;jenkins-hbase4:40717] zookeeper.ReadOnlyZKClient(139): Connect 0x238c9294 to 127.0.0.1:56417 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:06,669 DEBUG [RS:0;jenkins-hbase4:37027] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1dc12366, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:06,669 DEBUG [RS:0;jenkins-hbase4:37027] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19fea406, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:06,672 DEBUG [RS:1;jenkins-hbase4:35165] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d6f0b5d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:06,673 DEBUG [RS:1;jenkins-hbase4:35165] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@435ec97d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:06,674 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:06,675 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:06,675 INFO [PEWorker-1] 
regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792 2023-07-18 10:15:06,676 DEBUG [RS:2;jenkins-hbase4:40717] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d4b7895, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:06,676 DEBUG [RS:2;jenkins-hbase4:40717] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2974f9e4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:06,681 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35165 2023-07-18 10:15:06,682 INFO [RS:1;jenkins-hbase4:35165] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:15:06,682 INFO [RS:1;jenkins-hbase4:35165] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:15:06,682 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1022): About to register with Master. 
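
The table descriptor printed above (families info, rep_barrier and table, with BLOOMFILTER NONE, IN_MEMORY true, VERSIONS 3, BLOCKSIZE 8192 for info/table) can be approximated with the HBase 2.x builder API. A hedged sketch using a hypothetical table name and only two of the three families; the descriptor in the log is built internally by FSTableDescriptors, not by client code like this:

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.regionserver.BloomType;
  import org.apache.hadoop.hbase.util.Bytes;

  public class MetaLikeDescriptorSketch {
      public static void main(String[] args) {
          // 'info' and 'table' mirror the attributes printed above:
          // BLOOMFILTER => NONE, IN_MEMORY => true, VERSIONS => 3, BLOCKSIZE => 8192.
          TableDescriptor td = TableDescriptorBuilder
                  .newBuilder(TableName.valueOf("demo_meta_like"))   // hypothetical table name
                  .setColumnFamily(ColumnFamilyDescriptorBuilder
                          .newBuilder(Bytes.toBytes("info"))
                          .setBloomFilterType(BloomType.NONE)
                          .setInMemory(true)
                          .setMaxVersions(3)
                          .setBlocksize(8192)
                          .build())
                  .setColumnFamily(ColumnFamilyDescriptorBuilder
                          .newBuilder(Bytes.toBytes("table"))
                          .setBloomFilterType(BloomType.NONE)
                          .setInMemory(true)
                          .setMaxVersions(3)
                          .setBlocksize(8192)
                          .build())
                  .build();
          System.out.println(td);
      }
  }
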
2023-07-18 10:15:06,682 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46153,1689675305766 with isa=jenkins-hbase4.apache.org/172.31.14.131:35165, startcode=1689675306111 2023-07-18 10:15:06,682 DEBUG [RS:1;jenkins-hbase4:35165] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:15:06,684 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37027 2023-07-18 10:15:06,684 INFO [RS:0;jenkins-hbase4:37027] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:15:06,684 INFO [RS:0;jenkins-hbase4:37027] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:15:06,684 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40827, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:15:06,684 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:15:06,686 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46153] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:06,686 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:15:06,686 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 10:15:06,686 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46153,1689675305766 with isa=jenkins-hbase4.apache.org/172.31.14.131:37027, startcode=1689675305953 2023-07-18 10:15:06,687 DEBUG [RS:0;jenkins-hbase4:37027] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:15:06,687 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792 2023-07-18 10:15:06,687 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39145 2023-07-18 10:15:06,687 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40415 2023-07-18 10:15:06,688 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37281, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:15:06,688 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:06,688 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:40717 2023-07-18 10:15:06,688 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46153] master.ServerManager(394): Registering 
regionserver=jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:06,688 INFO [RS:2;jenkins-hbase4:40717] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:15:06,688 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 10:15:06,688 INFO [RS:2;jenkins-hbase4:40717] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:15:06,688 DEBUG [RS:1;jenkins-hbase4:35165] zookeeper.ZKUtil(162): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:06,688 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 10:15:06,689 WARN [RS:1;jenkins-hbase4:35165] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:15:06,688 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:15:06,689 INFO [RS:1;jenkins-hbase4:35165] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:06,689 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792 2023-07-18 10:15:06,689 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:06,689 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39145 2023-07-18 10:15:06,689 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40415 2023-07-18 10:15:06,689 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46153,1689675305766 with isa=jenkins-hbase4.apache.org/172.31.14.131:40717, startcode=1689675306287 2023-07-18 10:15:06,689 DEBUG [RS:2;jenkins-hbase4:40717] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:15:06,691 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35689, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:15:06,691 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46153] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:06,692 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 10:15:06,692 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 10:15:06,692 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792 2023-07-18 10:15:06,692 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39145 2023-07-18 10:15:06,692 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40415 2023-07-18 10:15:06,694 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37027,1689675305953] 2023-07-18 10:15:06,694 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35165,1689675306111] 2023-07-18 10:15:06,695 DEBUG [RS:0;jenkins-hbase4:37027] zookeeper.ZKUtil(162): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:06,695 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:06,695 WARN [RS:0;jenkins-hbase4:37027] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 10:15:06,695 INFO [RS:0;jenkins-hbase4:37027] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:06,696 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:06,696 DEBUG [RS:2;jenkins-hbase4:40717] zookeeper.ZKUtil(162): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:06,696 WARN [RS:2;jenkins-hbase4:40717] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
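
Region-server liveness above is tracked through ephemeral children of /hbase/rs plus watchers (the NodeChildrenChanged events and the ZKUtil "Set watcher on existing znode" lines). A minimal sketch of the same child-watch idea with the plain ZooKeeper client, assuming the quorum 127.0.0.1:56417 from the log; HBase itself goes through ZKWatcher/ZKUtil/ReadOnlyZKClient rather than this raw API:

  import java.util.List;
  import org.apache.zookeeper.ZooKeeper;

  public class RsZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
          // Quorum taken from the log above; session timeout mirrors the 90000 ms setting.
          ZooKeeper zk = new ZooKeeper("127.0.0.1:56417", 90_000,
                  event -> System.out.println("event: " + event.getType() + " " + event.getPath()));

          // Watch the /hbase/rs children, the znode the NodeChildrenChanged events refer to.
          List<String> servers = zk.getChildren("/hbase/rs", true);
          servers.forEach(System.out::println);

          zk.close();
      }
  }
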
2023-07-18 10:15:06,696 INFO [RS:2;jenkins-hbase4:40717] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:06,697 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40717,1689675306287] 2023-07-18 10:15:06,697 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:06,697 DEBUG [RS:1;jenkins-hbase4:35165] zookeeper.ZKUtil(162): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:06,698 DEBUG [RS:1;jenkins-hbase4:35165] zookeeper.ZKUtil(162): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:06,700 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:06,700 DEBUG [RS:1;jenkins-hbase4:35165] zookeeper.ZKUtil(162): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:06,703 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:15:06,704 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 10:15:06,704 INFO [RS:1;jenkins-hbase4:35165] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:15:06,704 DEBUG [RS:0;jenkins-hbase4:37027] zookeeper.ZKUtil(162): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:06,705 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/info 2023-07-18 10:15:06,705 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 10:15:06,706 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:06,706 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 10:15:06,706 INFO [RS:1;jenkins-hbase4:35165] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:15:06,706 DEBUG [RS:0;jenkins-hbase4:37027] zookeeper.ZKUtil(162): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:06,706 DEBUG [RS:2;jenkins-hbase4:40717] zookeeper.ZKUtil(162): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:06,706 INFO [RS:1;jenkins-hbase4:35165] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:15:06,707 INFO [RS:1;jenkins-hbase4:35165] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,707 DEBUG [RS:0;jenkins-hbase4:37027] zookeeper.ZKUtil(162): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:06,707 DEBUG [RS:2;jenkins-hbase4:40717] zookeeper.ZKUtil(162): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:06,707 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:15:06,708 DEBUG [RS:2;jenkins-hbase4:40717] zookeeper.ZKUtil(162): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:06,708 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:15:06,708 INFO [RS:1;jenkins-hbase4:35165] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
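
The CompactionConfiguration lines above fix the selection bounds (minFilesToCompact:3, maxFilesToCompact:10) and the size ratio (1.200000). A deliberately simplified illustration of that ratio rule — a file is a candidate only if it is not much larger than the files it would be compacted with — and not the actual ExploringCompactionPolicy code:

  import java.util.List;

  public class RatioCheckSketch {
      // Simplified size-ratio test: a file is compaction-eligible when it is no larger
      // than ratio * (sum of the other candidate files). Not HBase's real policy code.
      static boolean eligible(long fileSize, long sumOfOtherFiles, double ratio) {
          return fileSize <= ratio * sumOfOtherFiles;
      }

      static boolean selectionAllowed(int fileCount, int minFiles, int maxFiles) {
          return fileCount >= minFiles && fileCount <= maxFiles;
      }

      public static void main(String[] args) {
          List<Long> candidate = List.of(10L << 20, 12L << 20, 11L << 20); // three ~10 MB files
          long biggest = 12L << 20;
          long others = (10L << 20) + (11L << 20);
          System.out.println(selectionAllowed(candidate.size(), 3, 10)); // true: within [3, 10]
          System.out.println(eligible(biggest, others, 1.2));            // true: 12 MB <= 1.2 * 21 MB
      }
  }
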
2023-07-18 10:15:06,708 INFO [RS:0;jenkins-hbase4:37027] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:15:06,709 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:15:06,709 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,709 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,709 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,709 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:15:06,709 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 10:15:06,709 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,709 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,709 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:06,710 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,710 INFO [RS:2;jenkins-hbase4:40717] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:15:06,710 INFO [RS:0;jenkins-hbase4:37027] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:15:06,710 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,710 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,710 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:06,710 DEBUG [RS:1;jenkins-hbase4:35165] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,710 INFO [RS:0;jenkins-hbase4:37027] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:15:06,710 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 10:15:06,710 INFO [RS:0;jenkins-hbase4:37027] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,712 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/table 2023-07-18 10:15:06,712 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 10:15:06,713 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:06,716 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:15:06,716 INFO [RS:2;jenkins-hbase4:40717] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:15:06,716 INFO [RS:1;jenkins-hbase4:35165] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,716 INFO [RS:1;jenkins-hbase4:35165] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,716 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740 2023-07-18 10:15:06,716 INFO [RS:1;jenkins-hbase4:35165] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
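
The MemStoreFlusher figures above (globalMemStoreLimit=782.4 M, low mark 743.3 M) are consistent with the low-water mark being 95% of the global limit, which matches the usual default for hbase.regionserver.global.memstore.size.lower.limit (treated here as an assumption):

  public class MemStoreLimitSketch {
      public static void main(String[] args) {
          // Figure taken from the MemStoreFlusher line above.
          double globalLimitMb = 782.4;
          // Assumption: low-water mark = limit * default lower-limit fraction of 0.95.
          double lowerLimitFraction = 0.95;
          double lowMarkMb = globalLimitMb * lowerLimitFraction;
          System.out.printf("lowMark = %.1f M%n", lowMarkMb);  // prints 743.3 M, matching the log
      }
  }
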
2023-07-18 10:15:06,716 INFO [RS:2;jenkins-hbase4:40717] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:15:06,716 INFO [RS:2;jenkins-hbase4:40717] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,718 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740 2023-07-18 10:15:06,718 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:15:06,719 INFO [RS:0;jenkins-hbase4:37027] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,721 DEBUG [RS:0;jenkins-hbase4:37027] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,728 INFO [RS:0;jenkins-hbase4:37027] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,728 INFO [RS:0;jenkins-hbase4:37027] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,728 INFO [RS:0;jenkins-hbase4:37027] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-18 10:15:06,729 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 10:15:06,731 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 10:15:06,734 INFO [RS:2;jenkins-hbase4:40717] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,736 INFO [RS:1;jenkins-hbase4:35165] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:15:06,736 INFO [RS:1;jenkins-hbase4:35165] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35165,1689675306111-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,737 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:06,737 DEBUG [RS:2;jenkins-hbase4:40717] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:06,738 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10164085120, jitterRate=-0.05339580774307251}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 10:15:06,738 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 
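
Two of the numbers above can be reproduced arithmetically: flushSizeLowerBound=44739242 is the memstore flush size divided by the three hbase:meta families (the 42.7 M per family mentioned by FlushLargeStoresPolicy), and desiredMaxFileSize=10164085120 is the maximum file size scaled by the logged jitterRate. The 128 MB flush size and 10 GB max file size used below are the usual defaults and are assumptions here:

  public class RegionSizingSketch {
      public static void main(String[] args) {
          // Per-family flush lower bound: memstore flush size / number of column families
          // (hbase:meta has 3: info, rep_barrier, table).
          long memstoreFlushSize = 128L * 1024 * 1024;      // assumption: default 128 MB
          long families = 3;
          System.out.println(memstoreFlushSize / families); // 44739242, as logged above

          // Split-policy jitter: desiredMaxFileSize = maxFileSize * (1 + jitterRate).
          long maxFileSize = 10L * 1024 * 1024 * 1024;      // assumption: default 10 GB
          double jitterRate = -0.05339580774307251;         // value from the log line above
          // ~10164085120, matching the logged desiredMaxFileSize up to rounding.
          System.out.println((long) (maxFileSize * (1 + jitterRate)));
      }
  }
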
2023-07-18 10:15:06,738 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 10:15:06,738 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 10:15:06,738 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 10:15:06,738 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 10:15:06,738 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 10:15:06,743 INFO [RS:2;jenkins-hbase4:40717] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,743 INFO [RS:2;jenkins-hbase4:40717] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,743 INFO [RS:2;jenkins-hbase4:40717] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,744 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 10:15:06,744 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 10:15:06,749 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 10:15:06,749 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 10:15:06,749 INFO [RS:0;jenkins-hbase4:37027] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:15:06,749 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 10:15:06,749 INFO [RS:0;jenkins-hbase4:37027] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37027,1689675305953-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:06,755 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 10:15:06,755 INFO [RS:2;jenkins-hbase4:40717] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:15:06,755 INFO [RS:2;jenkins-hbase4:40717] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40717,1689675306287-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
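
The InitMetaProcedure lines walk through two explicit states, INIT_META_WRITE_FS_LAYOUT and then INIT_META_ASSIGN_META, before handing off to a child TransitRegionStateProcedure. A toy state-machine driver over those two state names — purely illustrative, not the procedure-v2 ProcedureExecutor framework:

  public class InitMetaStateSketch {
      // The two states visible in the InitMetaProcedure lines above.
      enum State { INIT_META_WRITE_FS_LAYOUT, INIT_META_ASSIGN_META, DONE }

      public static void main(String[] args) {
          State state = State.INIT_META_WRITE_FS_LAYOUT;
          while (state != State.DONE) {
              switch (state) {
                  case INIT_META_WRITE_FS_LAYOUT:
                      System.out.println("BOOTSTRAP: creating hbase:meta region");
                      state = State.INIT_META_ASSIGN_META;
                      break;
                  case INIT_META_ASSIGN_META:
                      System.out.println("Going to assign meta (spawn child ASSIGN procedure)");
                      state = State.DONE;
                      break;
                  default:
                      state = State.DONE;
              }
          }
      }
  }
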
2023-07-18 10:15:06,758 INFO [RS:1;jenkins-hbase4:35165] regionserver.Replication(203): jenkins-hbase4.apache.org,35165,1689675306111 started 2023-07-18 10:15:06,760 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35165,1689675306111, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35165, sessionid=0x10177ed96110002 2023-07-18 10:15:06,760 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 10:15:06,762 DEBUG [RS:1;jenkins-hbase4:35165] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:15:06,763 DEBUG [RS:1;jenkins-hbase4:35165] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:06,763 DEBUG [RS:1;jenkins-hbase4:35165] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35165,1689675306111' 2023-07-18 10:15:06,763 DEBUG [RS:1;jenkins-hbase4:35165] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:15:06,763 DEBUG [RS:1;jenkins-hbase4:35165] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:15:06,764 DEBUG [RS:1;jenkins-hbase4:35165] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:15:06,764 DEBUG [RS:1;jenkins-hbase4:35165] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:15:06,764 DEBUG [RS:1;jenkins-hbase4:35165] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:06,764 DEBUG [RS:1;jenkins-hbase4:35165] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35165,1689675306111' 2023-07-18 10:15:06,764 DEBUG [RS:1;jenkins-hbase4:35165] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:15:06,764 DEBUG [RS:1;jenkins-hbase4:35165] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:15:06,765 DEBUG [RS:1;jenkins-hbase4:35165] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:15:06,765 INFO [RS:1;jenkins-hbase4:35165] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 10:15:06,765 INFO [RS:1;jenkins-hbase4:35165] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 10:15:06,768 INFO [RS:0;jenkins-hbase4:37027] regionserver.Replication(203): jenkins-hbase4.apache.org,37027,1689675305953 started 2023-07-18 10:15:06,768 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37027,1689675305953, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37027, sessionid=0x10177ed96110001 2023-07-18 10:15:06,768 DEBUG [RS:0;jenkins-hbase4:37027] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:15:06,768 DEBUG [RS:0;jenkins-hbase4:37027] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:06,769 DEBUG [RS:0;jenkins-hbase4:37027] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37027,1689675305953' 2023-07-18 10:15:06,769 DEBUG [RS:0;jenkins-hbase4:37027] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:15:06,769 DEBUG [RS:0;jenkins-hbase4:37027] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:15:06,769 DEBUG [RS:0;jenkins-hbase4:37027] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:15:06,769 DEBUG [RS:0;jenkins-hbase4:37027] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:15:06,770 DEBUG [RS:0;jenkins-hbase4:37027] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:06,770 DEBUG [RS:0;jenkins-hbase4:37027] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37027,1689675305953' 2023-07-18 10:15:06,770 DEBUG [RS:0;jenkins-hbase4:37027] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:15:06,770 DEBUG [RS:0;jenkins-hbase4:37027] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:15:06,770 DEBUG [RS:0;jenkins-hbase4:37027] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:15:06,771 INFO [RS:0;jenkins-hbase4:37027] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 10:15:06,771 INFO [RS:0;jenkins-hbase4:37027] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 10:15:06,786 INFO [RS:2;jenkins-hbase4:40717] regionserver.Replication(203): jenkins-hbase4.apache.org,40717,1689675306287 started 2023-07-18 10:15:06,787 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40717,1689675306287, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40717, sessionid=0x10177ed96110003 2023-07-18 10:15:06,787 DEBUG [RS:2;jenkins-hbase4:40717] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:15:06,787 DEBUG [RS:2;jenkins-hbase4:40717] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:06,787 DEBUG [RS:2;jenkins-hbase4:40717] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40717,1689675306287' 2023-07-18 10:15:06,787 DEBUG [RS:2;jenkins-hbase4:40717] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:15:06,787 DEBUG [RS:2;jenkins-hbase4:40717] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:15:06,788 DEBUG [RS:2;jenkins-hbase4:40717] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:15:06,788 DEBUG [RS:2;jenkins-hbase4:40717] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:15:06,788 DEBUG [RS:2;jenkins-hbase4:40717] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:06,788 DEBUG [RS:2;jenkins-hbase4:40717] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40717,1689675306287' 2023-07-18 10:15:06,788 DEBUG [RS:2;jenkins-hbase4:40717] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:15:06,788 DEBUG [RS:2;jenkins-hbase4:40717] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:15:06,789 DEBUG [RS:2;jenkins-hbase4:40717] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:15:06,789 INFO [RS:2;jenkins-hbase4:40717] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 10:15:06,789 INFO [RS:2;jenkins-hbase4:40717] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
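
With all three region servers now reporting "Serving as ...", a client can confirm the same membership through the Admin API. A sketch assuming the HBase 2.x client and the ZooKeeper quorum/port seen in the log; the property values set here are assumptions for a local test cluster:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.ClusterMetrics;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class LiveServersSketch {
      public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          conf.set("hbase.zookeeper.quorum", "127.0.0.1");          // assumption: local quorum host
          conf.set("hbase.zookeeper.property.clientPort", "56417"); // port from the log above
          try (Connection conn = ConnectionFactory.createConnection(conf);
               Admin admin = conn.getAdmin()) {
              ClusterMetrics metrics = admin.getClusterMetrics();
              // Expect the three ServerNames registered above (ports 37027, 35165, 40717).
              metrics.getLiveServerMetrics().keySet().forEach(System.out::println);
          }
      }
  }
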
2023-07-18 10:15:06,867 INFO [RS:1;jenkins-hbase4:35165] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35165%2C1689675306111, suffix=, logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,35165,1689675306111, archiveDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs, maxLogs=32 2023-07-18 10:15:06,872 INFO [RS:0;jenkins-hbase4:37027] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37027%2C1689675305953, suffix=, logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,37027,1689675305953, archiveDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs, maxLogs=32 2023-07-18 10:15:06,884 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK] 2023-07-18 10:15:06,884 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK] 2023-07-18 10:15:06,889 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK] 2023-07-18 10:15:06,891 INFO [RS:2;jenkins-hbase4:40717] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40717%2C1689675306287, suffix=, logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,40717,1689675306287, archiveDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs, maxLogs=32 2023-07-18 10:15:06,908 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK] 2023-07-18 10:15:06,908 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK] 2023-07-18 10:15:06,908 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK] 2023-07-18 10:15:06,909 INFO [RS:1;jenkins-hbase4:35165] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,35165,1689675306111/jenkins-hbase4.apache.org%2C35165%2C1689675306111.1689675306867 2023-07-18 10:15:06,912 DEBUG [RS:1;jenkins-hbase4:35165] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK], DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK], DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK]] 2023-07-18 10:15:06,913 DEBUG [jenkins-hbase4:46153] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 10:15:06,913 DEBUG [jenkins-hbase4:46153] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:06,914 DEBUG [jenkins-hbase4:46153] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:06,914 DEBUG [jenkins-hbase4:46153] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:06,914 DEBUG [jenkins-hbase4:46153] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:06,914 DEBUG [jenkins-hbase4:46153] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:06,915 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40717,1689675306287, state=OPENING 2023-07-18 10:15:06,915 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK] 2023-07-18 10:15:06,915 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK] 2023-07-18 10:15:06,915 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK] 2023-07-18 10:15:06,917 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 10:15:06,917 INFO [RS:0;jenkins-hbase4:37027] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,37027,1689675305953/jenkins-hbase4.apache.org%2C37027%2C1689675305953.1689675306873 2023-07-18 10:15:06,917 DEBUG [RS:0;jenkins-hbase4:37027] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK], DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK], DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK]] 2023-07-18 10:15:06,918 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:06,919 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40717,1689675306287}] 2023-07-18 10:15:06,919 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path 
/hbase/meta-region-server: CHANGED 2023-07-18 10:15:06,920 INFO [RS:2;jenkins-hbase4:40717] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,40717,1689675306287/jenkins-hbase4.apache.org%2C40717%2C1689675306287.1689675306891 2023-07-18 10:15:06,920 DEBUG [RS:2;jenkins-hbase4:40717] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK], DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK], DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK]] 2023-07-18 10:15:06,923 WARN [ReadOnlyZKClient-127.0.0.1:56417@0x0d9e62ef] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 10:15:06,923 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:06,924 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43746, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:06,925 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40717] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:43746 deadline: 1689675366924, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:07,049 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 10:15:07,073 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:07,074 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:15:07,076 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43760, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:15:07,080 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 10:15:07,080 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:07,083 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40717%2C1689675306287.meta, suffix=.meta, logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,40717,1689675306287, archiveDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs, maxLogs=32 2023-07-18 10:15:07,101 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK] 2023-07-18 10:15:07,101 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK] 2023-07-18 10:15:07,103 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK] 2023-07-18 10:15:07,111 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,40717,1689675306287/jenkins-hbase4.apache.org%2C40717%2C1689675306287.meta.1689675307083.meta 2023-07-18 10:15:07,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK], DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK], DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK]] 2023-07-18 10:15:07,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:07,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 10:15:07,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 10:15:07,113 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
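Side note on the WAL configuration entries above (blocksize=256 MB, rollsize=128 MB, maxLogs=32 for the AsyncFSWAL instances): a minimal sketch of how such values are normally supplied, assuming the stock hbase.regionserver.hlog.blocksize, hbase.regionserver.logroll.multiplier and hbase.regionserver.maxlogs keys — the test run itself appears to rely on defaults, so this is illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Assumed mapping: 256 MB WAL block size, roll at 50% of it (= 128 MB),
            // and keep at most 32 un-archived WAL files per region server.
            conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
            conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
            conf.setInt("hbase.regionserver.maxlogs", 32);
            long rollSize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
                * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
            System.out.println("expected roll size: " + rollSize + " bytes"); // 134217728 = 128 MB
        }
    }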
2023-07-18 10:15:07,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 10:15:07,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:07,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 10:15:07,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 10:15:07,116 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 10:15:07,117 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/info 2023-07-18 10:15:07,117 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/info 2023-07-18 10:15:07,117 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 10:15:07,118 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:07,118 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 10:15:07,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:15:07,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/rep_barrier 2023-07-18 10:15:07,121 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 10:15:07,122 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:07,122 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 10:15:07,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/table 2023-07-18 10:15:07,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/table 2023-07-18 10:15:07,123 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 10:15:07,124 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:07,127 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740 2023-07-18 10:15:07,131 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740 2023-07-18 10:15:07,135 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 10:15:07,139 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 10:15:07,143 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9569760160, jitterRate=-0.10874663293361664}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 10:15:07,143 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 10:15:07,148 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689675307073 2023-07-18 10:15:07,153 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 10:15:07,154 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 10:15:07,159 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40717,1689675306287, state=OPEN 2023-07-18 10:15:07,164 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 10:15:07,164 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 10:15:07,166 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 10:15:07,166 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40717,1689675306287 in 245 msec 2023-07-18 10:15:07,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 10:15:07,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 417 msec 2023-07-18 10:15:07,179 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 554 msec 2023-07-18 10:15:07,179 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689675307179, completionTime=-1 2023-07-18 10:15:07,179 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 10:15:07,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
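Side note: the sequence above ends with hbase:meta opened and marked OPEN on jenkins-hbase4.apache.org,40717,1689675306287. A rough client-side sketch of resolving that location once the cluster is up, assuming the standard 2.x client API and the ZooKeeper ensemble 127.0.0.1:56417 shown in the ZKWatcher entries of this log; not part of the test code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Quorum/port taken from the log above; any running cluster would do.
            conf.set(HConstants.ZOOKEEPER_QUORUM, "127.0.0.1");
            conf.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, 56417);
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
                HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
                // Expected to print the server chosen above, e.g. jenkins-hbase4.apache.org,40717,...
                System.out.println("hbase:meta is served by " + loc.getServerName());
            }
        }
    }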
2023-07-18 10:15:07,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 10:15:07,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689675367184 2023-07-18 10:15:07,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689675427184 2023-07-18 10:15:07,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-18 10:15:07,190 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46153,1689675305766-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:07,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46153,1689675305766-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:07,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46153,1689675305766-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:07,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46153, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:07,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:07,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 10:15:07,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:07,192 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 10:15:07,200 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:07,201 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 10:15:07,201 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:15:07,203 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,204 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8 empty. 2023-07-18 10:15:07,204 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,204 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 10:15:07,228 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:07,231 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 10:15:07,233 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:07,234 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:15:07,240 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,241 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:07,241 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c empty. 2023-07-18 10:15:07,242 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,242 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 10:15:07,243 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e516bbec513a2690d38980a1e6d81fa8, NAME => 'hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp 2023-07-18 10:15:07,267 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:07,267 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e516bbec513a2690d38980a1e6d81fa8, disabling compactions & flushes 2023-07-18 10:15:07,267 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:07,267 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:07,267 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. after waiting 0 ms 2023-07-18 10:15:07,267 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:07,267 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 
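Side note: the 'hbase:namespace' descriptor logged by HMaster(2148) above (BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192', remaining attributes at their defaults) can be reproduced with the 2.x builder API. A sketch using a hypothetical user table name, since hbase:namespace itself is created internally by CreateTableProcedure rather than by a client call.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class NamespaceLikeTableSketch {
        // Builds a descriptor equivalent to the one logged for hbase:namespace;
        // KEEP_DELETED_CELLS, DATA_BLOCK_ENCODING, COMPRESSION, TTL, MIN_VERSIONS,
        // BLOCKCACHE and REPLICATION_SCOPE are left at the builder defaults, which
        // match the values printed in the log.
        static TableDescriptor build() {
            ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                .setInMemory(true)                   // IN_MEMORY => 'true'
                .setMaxVersions(10)                  // VERSIONS => '10'
                .setBlocksize(8192)                  // BLOCKSIZE => '8192'
                .build();
            return TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo_namespace_like")) // hypothetical name
                .setColumnFamily(info)
                .build();
        }

        static void create(Connection conn) throws java.io.IOException {
            try (Admin admin = conn.getAdmin()) {
                admin.createTable(build());
            }
        }
    }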
2023-07-18 10:15:07,267 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e516bbec513a2690d38980a1e6d81fa8: 2023-07-18 10:15:07,270 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:15:07,272 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675307272"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675307272"}]},"ts":"1689675307272"} 2023-07-18 10:15:07,276 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:07,276 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 10:15:07,277 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:15:07,277 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675307277"}]},"ts":"1689675307277"} 2023-07-18 10:15:07,277 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 61b3dc7a57f4e33b37513ac05598296c, NAME => 'hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp 2023-07-18 10:15:07,278 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 10:15:07,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:07,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:07,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:07,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:07,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:07,284 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e516bbec513a2690d38980a1e6d81fa8, ASSIGN}] 2023-07-18 10:15:07,285 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e516bbec513a2690d38980a1e6d81fa8, ASSIGN 2023-07-18 10:15:07,285 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e516bbec513a2690d38980a1e6d81fa8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37027,1689675305953; forceNewPlan=false, retain=false 2023-07-18 10:15:07,288 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:07,288 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 61b3dc7a57f4e33b37513ac05598296c, disabling compactions & flushes 2023-07-18 10:15:07,288 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:07,288 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:07,288 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. after waiting 0 ms 2023-07-18 10:15:07,288 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:07,288 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:07,288 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 61b3dc7a57f4e33b37513ac05598296c: 2023-07-18 10:15:07,290 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:15:07,292 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675307292"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675307292"}]},"ts":"1689675307292"} 2023-07-18 10:15:07,293 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 10:15:07,294 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:15:07,294 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675307294"}]},"ts":"1689675307294"} 2023-07-18 10:15:07,295 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 10:15:07,299 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:07,299 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:07,299 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:07,299 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:07,299 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:07,299 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=61b3dc7a57f4e33b37513ac05598296c, ASSIGN}] 2023-07-18 10:15:07,302 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=61b3dc7a57f4e33b37513ac05598296c, ASSIGN 2023-07-18 10:15:07,302 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=61b3dc7a57f4e33b37513ac05598296c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35165,1689675306111; forceNewPlan=false, retain=false 2023-07-18 10:15:07,303 INFO [jenkins-hbase4:46153] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-18 10:15:07,305 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=e516bbec513a2690d38980a1e6d81fa8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:07,305 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675307305"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675307305"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675307305"}]},"ts":"1689675307305"} 2023-07-18 10:15:07,305 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=61b3dc7a57f4e33b37513ac05598296c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:07,305 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675307305"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675307305"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675307305"}]},"ts":"1689675307305"} 2023-07-18 10:15:07,307 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure e516bbec513a2690d38980a1e6d81fa8, server=jenkins-hbase4.apache.org,37027,1689675305953}] 2023-07-18 10:15:07,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 61b3dc7a57f4e33b37513ac05598296c, server=jenkins-hbase4.apache.org,35165,1689675306111}] 2023-07-18 10:15:07,461 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:07,461 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:07,461 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:15:07,462 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:15:07,463 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43466, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:15:07,463 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48948, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:15:07,467 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 
2023-07-18 10:15:07,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61b3dc7a57f4e33b37513ac05598296c, NAME => 'hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:07,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 10:15:07,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. service=MultiRowMutationService 2023-07-18 10:15:07,468 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-18 10:15:07,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:07,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,471 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 
2023-07-18 10:15:07,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e516bbec513a2690d38980a1e6d81fa8, NAME => 'hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:07,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:07,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,471 INFO [StoreOpener-61b3dc7a57f4e33b37513ac05598296c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,473 INFO [StoreOpener-e516bbec513a2690d38980a1e6d81fa8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,473 DEBUG [StoreOpener-61b3dc7a57f4e33b37513ac05598296c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c/m 2023-07-18 10:15:07,473 DEBUG [StoreOpener-61b3dc7a57f4e33b37513ac05598296c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c/m 2023-07-18 10:15:07,473 INFO [StoreOpener-61b3dc7a57f4e33b37513ac05598296c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61b3dc7a57f4e33b37513ac05598296c columnFamilyName m 2023-07-18 10:15:07,474 DEBUG [StoreOpener-e516bbec513a2690d38980a1e6d81fa8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8/info 2023-07-18 10:15:07,474 DEBUG 
[StoreOpener-e516bbec513a2690d38980a1e6d81fa8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8/info 2023-07-18 10:15:07,474 INFO [StoreOpener-61b3dc7a57f4e33b37513ac05598296c-1] regionserver.HStore(310): Store=61b3dc7a57f4e33b37513ac05598296c/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:07,474 INFO [StoreOpener-e516bbec513a2690d38980a1e6d81fa8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e516bbec513a2690d38980a1e6d81fa8 columnFamilyName info 2023-07-18 10:15:07,475 INFO [StoreOpener-e516bbec513a2690d38980a1e6d81fa8-1] regionserver.HStore(310): Store=e516bbec513a2690d38980a1e6d81fa8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:07,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,478 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:07,478 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:07,482 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:07,482 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:07,483 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 61b3dc7a57f4e33b37513ac05598296c; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@32625a6a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:15:07,483 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e516bbec513a2690d38980a1e6d81fa8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9756288800, jitterRate=-0.09137479960918427}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:15:07,483 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 61b3dc7a57f4e33b37513ac05598296c: 2023-07-18 10:15:07,483 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e516bbec513a2690d38980a1e6d81fa8: 2023-07-18 10:15:07,484 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c., pid=9, masterSystemTime=1689675307461 2023-07-18 10:15:07,484 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8., pid=8, masterSystemTime=1689675307461 2023-07-18 10:15:07,489 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:07,490 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:07,490 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=61b3dc7a57f4e33b37513ac05598296c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:07,490 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689675307490"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675307490"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675307490"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675307490"}]},"ts":"1689675307490"} 2023-07-18 10:15:07,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:07,491 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 
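Side note: the RegionStateStore puts above track each region through OPENING and OPEN by writing the info:state, info:sn, info:server, info:serverstartcode and info:seqnumDuringOpen columns of hbase:meta. A rough sketch of reading the state and server columns back with the plain client API — the family and qualifier names come from the puts in this log, everything else is assumed.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class MetaStateScanSketch {
        static void dumpAssignments(Connection conn) throws java.io.IOException {
            byte[] info = Bytes.toBytes("info");      // catalog family used in the puts above
            byte[] state = Bytes.toBytes("state");    // OPENING / OPEN written by the master
            byte[] server = Bytes.toBytes("server");  // host,port of the hosting region server
            Scan scan = new Scan().addFamily(info);
            try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
                 ResultScanner scanner = meta.getScanner(scan)) {
                for (Result row : scanner) {
                    System.out.println(Bytes.toString(row.getRow())
                        + " state=" + Bytes.toString(row.getValue(info, state))
                        + " server=" + Bytes.toString(row.getValue(info, server)));
                }
            }
        }
    }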
2023-07-18 10:15:07,492 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=e516bbec513a2690d38980a1e6d81fa8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:07,492 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689675307492"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675307492"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675307492"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675307492"}]},"ts":"1689675307492"} 2023-07-18 10:15:07,494 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 10:15:07,494 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 61b3dc7a57f4e33b37513ac05598296c, server=jenkins-hbase4.apache.org,35165,1689675306111 in 184 msec 2023-07-18 10:15:07,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 10:15:07,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure e516bbec513a2690d38980a1e6d81fa8, server=jenkins-hbase4.apache.org,37027,1689675305953 in 186 msec 2023-07-18 10:15:07,497 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 10:15:07,497 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=61b3dc7a57f4e33b37513ac05598296c, ASSIGN in 195 msec 2023-07-18 10:15:07,498 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:15:07,498 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-18 10:15:07,498 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e516bbec513a2690d38980a1e6d81fa8, ASSIGN in 211 msec 2023-07-18 10:15:07,498 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675307498"}]},"ts":"1689675307498"} 2023-07-18 10:15:07,499 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:15:07,499 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675307499"}]},"ts":"1689675307499"} 2023-07-18 10:15:07,500 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 10:15:07,501 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 10:15:07,502 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:15:07,504 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 275 msec 2023-07-18 10:15:07,505 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:15:07,506 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 314 msec 2023-07-18 10:15:07,534 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:07,536 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43468, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:07,538 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 10:15:07,538 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 10:15:07,544 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:07,544 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:07,546 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 10:15:07,547 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 10:15:07,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 10:15:07,597 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:07,597 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:07,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:07,602 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 
172.31.14.131:48956, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:07,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 10:15:07,612 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:07,615 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-18 10:15:07,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 10:15:07,633 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:07,637 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-18 10:15:07,651 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 10:15:07,654 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 10:15:07,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.187sec 2023-07-18 10:15:07,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 10:15:07,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 10:15:07,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 10:15:07,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46153,1689675305766-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 10:15:07,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46153,1689675305766-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
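Side note: pid=10 and pid=11 above are CreateNamespaceProcedure runs for the built-in 'default' and 'hbase' namespaces. For a user-defined namespace the equivalent client call would look roughly like the following hypothetical helper, assuming the standard Admin API; the built-in namespaces are created by the master itself and need no such call.

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public final class NamespaceSketch {
        static void ensureNamespace(Connection conn, String name) throws java.io.IOException {
            try (Admin admin = conn.getAdmin()) {
                // Mirrors CreateNamespaceProcedure: a plain descriptor with no extra configuration.
                admin.createNamespace(NamespaceDescriptor.create(name).build());
                for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
                    System.out.println("namespace: " + ns.getName());
                }
            }
        }
    }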
2023-07-18 10:15:07,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 10:15:07,751 DEBUG [Listener at localhost/44679] zookeeper.ReadOnlyZKClient(139): Connect 0x58a1b5a8 to 127.0.0.1:56417 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:07,757 DEBUG [Listener at localhost/44679] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b91e364, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:07,758 DEBUG [hconnection-0x4e3646f3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:07,760 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43762, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:07,761 INFO [Listener at localhost/44679] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:07,762 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:07,764 DEBUG [Listener at localhost/44679] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 10:15:07,769 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36172, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 10:15:07,772 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 10:15:07,773 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:07,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 10:15:07,774 DEBUG [Listener at localhost/44679] zookeeper.ReadOnlyZKClient(139): Connect 0x1f28e8dd to 127.0.0.1:56417 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:07,781 DEBUG [Listener at localhost/44679] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63d52415, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:07,782 INFO [Listener at localhost/44679] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56417 2023-07-18 10:15:07,786 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:07,786 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10177ed9611000a connected 2023-07-18 
10:15:07,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:07,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:07,791 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 10:15:07,803 INFO [Listener at localhost/44679] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 10:15:07,803 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:07,803 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:07,803 INFO [Listener at localhost/44679] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 10:15:07,803 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 10:15:07,803 INFO [Listener at localhost/44679] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 10:15:07,803 INFO [Listener at localhost/44679] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 10:15:07,804 INFO [Listener at localhost/44679] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37193 2023-07-18 10:15:07,805 INFO [Listener at localhost/44679] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 10:15:07,806 DEBUG [Listener at localhost/44679] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 10:15:07,806 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:07,807 INFO [Listener at localhost/44679] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 10:15:07,808 INFO [Listener at localhost/44679] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37193 connecting to ZooKeeper ensemble=127.0.0.1:56417 2023-07-18 10:15:07,811 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:371930x0, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 10:15:07,814 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(162): regionserver:371930x0, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 10:15:07,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:37193-0x10177ed9611000b connected 2023-07-18 10:15:07,815 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(162): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 10:15:07,816 DEBUG [Listener at localhost/44679] zookeeper.ZKUtil(164): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 10:15:07,816 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37193 2023-07-18 10:15:07,817 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37193 2023-07-18 10:15:07,818 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37193 2023-07-18 10:15:07,822 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37193 2023-07-18 10:15:07,822 DEBUG [Listener at localhost/44679] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37193 2023-07-18 10:15:07,824 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 10:15:07,824 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 10:15:07,824 INFO [Listener at localhost/44679] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 10:15:07,825 INFO [Listener at localhost/44679] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 10:15:07,825 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 10:15:07,825 INFO [Listener at localhost/44679] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 10:15:07,825 INFO [Listener at localhost/44679] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
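The ListRSGroupInfos requests logged a little earlier ("Client=jenkins//172.31.14.131 list rsgroup") are issued by the test's VerifyingRSGroupAdminClient against the RSGroupAdminService endpoint. A minimal sketch of driving the same listing from client code, assuming an hbase-site.xml on the classpath that points at the cluster under test; the printing is illustrative only.

```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRSGroupsSketch {
  public static void main(String[] args) throws Exception {
    // Assumes the configuration resolves to the cluster whose log appears here.
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // RSGroupAdminClient talks to the RSGroupAdminService coprocessor endpoint,
      // the same endpoint answering the ListRSGroupInfos calls in this log.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
      List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
      for (RSGroupInfo group : groups) {
        System.out.println(group.getName() + " servers=" + group.getServers()
            + " tables=" + group.getTables());
      }
    }
  }
}
```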
2023-07-18 10:15:07,825 INFO [Listener at localhost/44679] http.HttpServer(1146): Jetty bound to port 45549 2023-07-18 10:15:07,825 INFO [Listener at localhost/44679] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 10:15:07,827 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:07,827 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38fe434a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,AVAILABLE} 2023-07-18 10:15:07,827 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:07,827 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4858353d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 10:15:07,942 INFO [Listener at localhost/44679] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 10:15:07,943 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 10:15:07,943 INFO [Listener at localhost/44679] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 10:15:07,943 INFO [Listener at localhost/44679] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 10:15:07,944 INFO [Listener at localhost/44679] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 10:15:07,945 INFO [Listener at localhost/44679] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3da779a5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/java.io.tmpdir/jetty-0_0_0_0-45549-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7833341285175077198/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:07,946 INFO [Listener at localhost/44679] server.AbstractConnector(333): Started ServerConnector@2a301633{HTTP/1.1, (http/1.1)}{0.0.0.0:45549} 2023-07-18 10:15:07,946 INFO [Listener at localhost/44679] server.Server(415): Started @43658ms 2023-07-18 10:15:07,948 INFO [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(951): ClusterId : b2e60d7b-9c25-49d3-bf8a-79a5bdfb4c40 2023-07-18 10:15:07,949 DEBUG [RS:3;jenkins-hbase4:37193] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 10:15:07,950 DEBUG [RS:3;jenkins-hbase4:37193] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 10:15:07,950 DEBUG [RS:3;jenkins-hbase4:37193] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 10:15:07,953 DEBUG [RS:3;jenkins-hbase4:37193] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 10:15:07,954 DEBUG [RS:3;jenkins-hbase4:37193] zookeeper.ReadOnlyZKClient(139): Connect 0x1bb9e2c0 to 
127.0.0.1:56417 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 10:15:07,958 DEBUG [RS:3;jenkins-hbase4:37193] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@157e0fc7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 10:15:07,958 DEBUG [RS:3;jenkins-hbase4:37193] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@114189b0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:07,966 DEBUG [RS:3;jenkins-hbase4:37193] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:37193 2023-07-18 10:15:07,967 INFO [RS:3;jenkins-hbase4:37193] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 10:15:07,967 INFO [RS:3;jenkins-hbase4:37193] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 10:15:07,967 DEBUG [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 10:15:07,967 INFO [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46153,1689675305766 with isa=jenkins-hbase4.apache.org/172.31.14.131:37193, startcode=1689675307802 2023-07-18 10:15:07,967 DEBUG [RS:3;jenkins-hbase4:37193] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 10:15:07,969 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34469, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 10:15:07,969 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46153] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:07,970 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
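The reportForDuty and "Registering regionserver" entries above record the extra region server that the test base launches while "Restoring servers: 1". A hedged sketch of adding a region server to an already running HBaseTestingUtility minicluster; the TEST_UTIL field and helper method are assumptions for illustration, not the test's own code.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class AddRegionServerSketch {
  // Assumed to back a minicluster that is already up, as in TestRSGroupsBase.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void restoreOneServer() throws Exception {
    MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
    // Starts a new HRegionServer thread; it reports for duty to the active master
    // and registers under /hbase/rs, which is what the ZKWatcher entries here record.
    JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
    rst.waitForServerOnline();
  }
}
```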
2023-07-18 10:15:07,970 DEBUG [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792 2023-07-18 10:15:07,970 DEBUG [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39145 2023-07-18 10:15:07,970 DEBUG [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40415 2023-07-18 10:15:07,978 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:07,978 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:07,979 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:07,979 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:07,978 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:07,979 DEBUG [RS:3;jenkins-hbase4:37193] zookeeper.ZKUtil(162): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:07,979 WARN [RS:3;jenkins-hbase4:37193] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 10:15:07,979 INFO [RS:3;jenkins-hbase4:37193] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 10:15:07,979 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 10:15:07,979 DEBUG [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:07,979 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37193,1689675307802] 2023-07-18 10:15:07,980 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:07,983 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 10:15:07,983 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:07,983 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:07,983 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:07,984 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:07,984 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:07,984 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:07,984 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:07,984 DEBUG [RS:3;jenkins-hbase4:37193] zookeeper.ZKUtil(162): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:07,984 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:07,984 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:07,984 DEBUG [RS:3;jenkins-hbase4:37193] zookeeper.ZKUtil(162): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:07,984 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:07,985 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:07,985 DEBUG [RS:3;jenkins-hbase4:37193] zookeeper.ZKUtil(162): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:07,985 DEBUG [RS:3;jenkins-hbase4:37193] zookeeper.ZKUtil(162): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:07,986 DEBUG [RS:3;jenkins-hbase4:37193] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 10:15:07,986 INFO [RS:3;jenkins-hbase4:37193] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 10:15:07,987 INFO [RS:3;jenkins-hbase4:37193] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 10:15:07,988 INFO [RS:3;jenkins-hbase4:37193] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 10:15:07,988 INFO [RS:3;jenkins-hbase4:37193] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:07,988 INFO [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 10:15:07,989 INFO [RS:3;jenkins-hbase4:37193] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
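The WALFactory entry above shows the new region server instantiating AsyncFSWALProvider, the default WAL provider on this branch. A small sketch of pinning the provider explicitly through configuration before the server (or minicluster) starts, assuming the standard "hbase.wal.provider" key; the main method only echoes the setting.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "hbase.wal.provider" selects the WALProvider that WALFactory instantiates at
    // region server startup; "asyncfs" corresponds to AsyncFSWALProvider as logged
    // above, while "filesystem" would select the classic FSHLog-based provider.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println("WAL provider = " + conf.get("hbase.wal.provider"));
  }
}
```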
2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,990 DEBUG [RS:3;jenkins-hbase4:37193] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 10:15:07,991 INFO [RS:3;jenkins-hbase4:37193] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:07,991 INFO [RS:3;jenkins-hbase4:37193] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:07,991 INFO [RS:3;jenkins-hbase4:37193] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 10:15:08,002 INFO [RS:3;jenkins-hbase4:37193] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 10:15:08,002 INFO [RS:3;jenkins-hbase4:37193] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37193,1689675307802-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 10:15:08,012 INFO [RS:3;jenkins-hbase4:37193] regionserver.Replication(203): jenkins-hbase4.apache.org,37193,1689675307802 started 2023-07-18 10:15:08,012 INFO [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37193,1689675307802, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37193, sessionid=0x10177ed9611000b 2023-07-18 10:15:08,012 DEBUG [RS:3;jenkins-hbase4:37193] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 10:15:08,012 DEBUG [RS:3;jenkins-hbase4:37193] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:08,012 DEBUG [RS:3;jenkins-hbase4:37193] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37193,1689675307802' 2023-07-18 10:15:08,012 DEBUG [RS:3;jenkins-hbase4:37193] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 10:15:08,012 DEBUG [RS:3;jenkins-hbase4:37193] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 10:15:08,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:08,013 DEBUG [RS:3;jenkins-hbase4:37193] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 10:15:08,013 DEBUG [RS:3;jenkins-hbase4:37193] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 10:15:08,013 DEBUG [RS:3;jenkins-hbase4:37193] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:08,013 DEBUG [RS:3;jenkins-hbase4:37193] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37193,1689675307802' 2023-07-18 10:15:08,013 DEBUG [RS:3;jenkins-hbase4:37193] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 10:15:08,013 DEBUG [RS:3;jenkins-hbase4:37193] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 10:15:08,014 DEBUG [RS:3;jenkins-hbase4:37193] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 10:15:08,014 INFO [RS:3;jenkins-hbase4:37193] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 10:15:08,014 INFO [RS:3;jenkins-hbase4:37193] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
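The "add rsgroup master" request above and the moveServers attempt that follows come from the TestRSGroupsBase teardown; the stack trace below shows the expected ConstraintException, since the master's RPC address is not a registered region server. A minimal sketch of the same two RSGroupAdminClient calls with that exception handled, assuming an open Connection; the group name and address are copied from the log for illustration.

```java
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  static void moveMasterIntoGroup(Connection connection) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
    // Creates the "master" group, mirroring the AddRSGroup request in the log.
    rsGroupAdmin.addRSGroup("master");
    try {
      // The master's address is not a region server, so the endpoint rejects the
      // move with a ConstraintException, as in the trace below.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 46153)),
          "master");
    } catch (ConstraintException expected) {
      // The test base logs this case as "Got this on setup, FYI" and carries on.
      System.out.println("Rejected as expected: " + expected.getMessage());
    }
  }
}
```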
2023-07-18 10:15:08,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:08,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:08,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:08,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:08,019 DEBUG [hconnection-0x678a5151-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:08,020 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43772, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:08,024 DEBUG [hconnection-0x678a5151-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 10:15:08,026 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43474, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 10:15:08,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:08,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:08,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:08,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:08,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36172 deadline: 1689676508030, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:08,031 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:15:08,032 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:08,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:08,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:08,033 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:08,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:08,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:08,081 INFO [Listener at localhost/44679] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=560 (was 513) Potentially hanging thread: Listener at localhost/44679-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:39145 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:39145 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:2;jenkins-hbase4:40717-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data3/current/BP-774630301-172.31.14.131-1689675304930 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1648624085_17 at /127.0.0.1:35900 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 39145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@738d2c6f java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1796509781-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43981 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@6fedfb9b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1250997367-2326-acceptor-0@106abe91-ServerConnector@44575355{HTTP/1.1, (http/1.1)}{0.0.0.0:37431} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@43b721e7[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:43981 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1201654323-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5d1ed8d[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Listener at localhost/44679-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-335021329_17 at /127.0.0.1:36998 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4e3646f3-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) 
org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1648624085_17 at /127.0.0.1:35894 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 34513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1648624085_17 at /127.0.0.1:48890 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365218274-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:39145 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2f4291ae-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59011@0x55a3eb8d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-335021329_17 at /127.0.0.1:35892 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:43981 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp543222535-2311-acceptor-0@2d23bb8a-ServerConnector@b0c4942{HTTP/1.1, (http/1.1)}{0.0.0.0:37757} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x58a1b5a8-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x1f28e8dd-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp2128328936-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 39145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 633923924@qtp-319751269-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41615 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x1306cf76-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp2128328936-2281-acceptor-0@3238565a-ServerConnector@166c2be4{HTTP/1.1, (http/1.1)}{0.0.0.0:33241} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-678cb6cc-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1201654323-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1732601828_17 at /127.0.0.1:36972 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:56417 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792-prefix:jenkins-hbase4.apache.org,40717,1689675306287.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675306649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792-prefix:jenkins-hbase4.apache.org,37027,1689675305953 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365218274-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 35685 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@14576aea sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:43981 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1796509781-2594 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x1f28e8dd sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-335021329_17 at /127.0.0.1:48862 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.util.JvmPauseMonitor$Monitor@12ec7eeb java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1395088326@qtp-1987689897-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Session-HouseKeeper-6f58735d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1201654323-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46153,1689675305766 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: jenkins-hbase4:37027Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
LeaseRenewer:jenkins.hfs.7@localhost:39145 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34513 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1648624085_17 at /127.0.0.1:37002 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3e5ca386 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x0d9e62ef-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x7266baae-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x238c9294-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1648624085_17 at /127.0.0.1:48876 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-76c6d96c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1796509781-2593 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1732601828_17 at /127.0.0.1:35870 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp543222535-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/40599-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1796509781-2589 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2128328936-2280 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x678a5151-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 242698050@qtp-319751269-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: M:0;jenkins-hbase4:46153 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792-prefix:jenkins-hbase4.apache.org,35165,1689675306111 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365218274-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:40717Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@1c3f7275 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792-prefix:jenkins-hbase4.apache.org,40717,1689675306287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1648624085_17 at /127.0.0.1:37016 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35165Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp543222535-2310 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown 
Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42475,1689675300038 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: IPC Server idle connection scanner for port 39145 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@54e87a3c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1250997367-2322 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1796509781-2595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server idle connection scanner for port 44679 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x7266baae-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44679-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1201654323-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp543222535-2314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp543222535-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:39145 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 1 on default port 44679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:43981 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/44679.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x0d9e62ef sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 35685 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:37193Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 34513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data2/current/BP-774630301-172.31.14.131-1689675304930 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1c84e900-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS:0;jenkins-hbase4:37027 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1250997367-2327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1796509781-2596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@26096cfa java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1201654323-2251-acceptor-0@5879898c-ServerConnector@22526232{HTTP/1.1, (http/1.1)}{0.0.0.0:43395} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:39145 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:39145 from jenkins 
java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59011@0x55a3eb8d-SendThread(127.0.0.1:59011) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x1306cf76 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2128328936-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:43981 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x1bb9e2c0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@9e9cbc3 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x1bb9e2c0-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675306648 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@864b7cb java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:56417@0x1f28e8dd-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-335021329_17 at /127.0.0.1:36936 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 35685 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x7266baae sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data4/current/BP-774630301-172.31.14.131-1689675304930 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2128328936-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data6/current/BP-774630301-172.31.14.131-1689675304930 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:37193-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@687590a3[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:43981 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_334494775_17 at /127.0.0.1:35890 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 739594865@qtp-180895302-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37539 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS:2;jenkins-hbase4:40717 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59011@0x55a3eb8d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x0d9e62ef-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 35685 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1250997367-2323 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2128328936-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 27233032@qtp-1987689897-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44501 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1365218274-2219 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-774630301-172.31.14.131-1689675304930 heartbeating to localhost/127.0.0.1:39145 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: 
BP-774630301-172.31.14.131-1689675304930:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 35685 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@26a0f5fc java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x58a1b5a8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1201654323-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 44679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_334494775_17 at /127.0.0.1:48850 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:39145 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 34513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData-prefix:jenkins-hbase4.apache.org,46153,1689675305766 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1250997367-2328 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:35165-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:56417): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44679.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data1/current/BP-774630301-172.31.14.131-1689675304930 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp543222535-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 44679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 34513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x1bb9e2c0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37027 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44679-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1796509781-2592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-774630301-172.31.14.131-1689675304930 heartbeating to localhost/127.0.0.1:39145 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2128328936-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x238c9294 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 270736164@qtp-888536680-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34363 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1250997367-2325 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365218274-2220-acceptor-0@370dee67-ServerConnector@700c4bda{HTTP/1.1, (http/1.1)}{0.0.0.0:40415} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(891237897) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp1201654323-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x1306cf76-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 35685 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 39145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x58a1b5a8 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1260679520.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-774630301-172.31.14.131-1689675304930:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x678a5151-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 402280663@qtp-180895302-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: 1562200617@qtp-888536680-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 44679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1201654323-2250 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-774630301-172.31.14.131-1689675304930 heartbeating to localhost/127.0.0.1:39145 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1250997367-2324 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/823419104.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data5/current/BP-774630301-172.31.14.131-1689675304930 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365218274-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:35165 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 44679 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1250997367-2329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp543222535-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2128328936-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1365218274-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:37193 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:39145 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp543222535-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:39145 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1732601828_17 at /127.0.0.1:48810 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_334494775_17 at /127.0.0.1:35832 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@c54cbc6 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_334494775_17 at /127.0.0.1:36992 [Receiving block BP-774630301-172.31.14.131-1689675304930:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44679-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:43981 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46153 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 0 on default port 39145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 39145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@4f298f22 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56417@0x238c9294-SendThread(127.0.0.1:56417) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1796509781-2590-acceptor-0@49f95966-ServerConnector@2a301633{HTTP/1.1, (http/1.1)}{0.0.0.0:45549} sun.nio.ch.ServerSocketChannelImpl.accept0(Native 
Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365218274-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/40599-SendThread(127.0.0.1:59011) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:37027-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (893071991) connection to localhost/127.0.0.1:43981 from jenkins.hfs.4 
java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) - Thread LEAK? -, OpenFileDescriptor=832 (was 801) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=428 (was 426) - SystemLoadAverage LEAK? -, ProcessCount=173 (was 173), AvailableMemoryMB=2974 (was 3254) 2023-07-18 10:15:08,084 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-18 10:15:08,101 INFO [Listener at localhost/44679] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=560, OpenFileDescriptor=832, MaxFileDescriptor=60000, SystemLoadAverage=428, ProcessCount=173, AvailableMemoryMB=2973 2023-07-18 10:15:08,101 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-18 10:15:08,101 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-18 10:15:08,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:08,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:08,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:08,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 10:15:08,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:08,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:08,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:08,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:15:08,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:08,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:15:08,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:08,113 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:15:08,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:08,116 INFO [RS:3;jenkins-hbase4:37193] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37193%2C1689675307802, suffix=, logDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,37193,1689675307802, archiveDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs, maxLogs=32 2023-07-18 10:15:08,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:08,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:08,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:08,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:08,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:08,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:08,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move 
servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:08,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:08,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36172 deadline: 1689676508125, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:08,126 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:15:08,128 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:08,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:08,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:08,129 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:08,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:08,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:08,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:08,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 10:15:08,138 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK] 2023-07-18 10:15:08,139 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK] 2023-07-18 10:15:08,139 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK] 2023-07-18 10:15:08,140 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:08,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-18 10:15:08,141 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:15:08,142 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:08,149 INFO [RS:3;jenkins-hbase4:37193] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/WALs/jenkins-hbase4.apache.org,37193,1689675307802/jenkins-hbase4.apache.org%2C37193%2C1689675307802.1689675308116 2023-07-18 10:15:08,150 DEBUG [RS:3;jenkins-hbase4:37193] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40947,DS-d7d4c697-1800-4fe2-8c51-9e93133f94a0,DISK], DatanodeInfoWithStorage[127.0.0.1:37329,DS-4c34fb2d-7b2d-4a52-825f-475978ce28ea,DISK], DatanodeInfoWithStorage[127.0.0.1:36533,DS-d8e51858-79c3-4a80-9e52-eae564e8c5c1,DISK]] 2023-07-18 10:15:08,150 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:08,150 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:08,153 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 10:15:08,154 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,155 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/e574f5116979219f1bbe4122f9260818 empty. 
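The create request logged by HMaster$4 above ('t1', REGION_REPLICATION => '1', one family 'cf1' with VERSIONS => '1' and otherwise default attributes) corresponds to an ordinary Admin.createTable call; the "Checking to see if procedure is done pid=12" lines are the client polling the resulting procedure. The test's own source is not part of this log, so the following is only a minimal sketch of how such a descriptor is built with the HBase 2.x client API; the class name and connection setup are illustrative assumptions.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateT1Sketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Descriptor matching the attributes printed in the log: one region replica,
      // family 'cf1' with VERSIONS => 1 and default settings for everything else.
      TableDescriptor t1 = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
              .setMaxVersions(1)
              .build())
          .build();
      // Blocks until the master's CreateTableProcedure (pid=12 in the log) completes.
      admin.createTable(t1);
    }
  }
}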
2023-07-18 10:15:08,155 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,155 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 10:15:08,174 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-18 10:15:08,175 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => e574f5116979219f1bbe4122f9260818, NAME => 't1,,1689675308133.e574f5116979219f1bbe4122f9260818.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp 2023-07-18 10:15:08,195 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689675308133.e574f5116979219f1bbe4122f9260818.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:08,195 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing e574f5116979219f1bbe4122f9260818, disabling compactions & flushes 2023-07-18 10:15:08,195 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:08,195 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:08,195 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689675308133.e574f5116979219f1bbe4122f9260818. after waiting 0 ms 2023-07-18 10:15:08,195 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:08,195 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:08,195 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for e574f5116979219f1bbe4122f9260818: 2023-07-18 10:15:08,198 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 10:15:08,199 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689675308133.e574f5116979219f1bbe4122f9260818.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675308199"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675308199"}]},"ts":"1689675308199"} 2023-07-18 10:15:08,200 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 10:15:08,201 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 10:15:08,201 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675308201"}]},"ts":"1689675308201"} 2023-07-18 10:15:08,202 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-18 10:15:08,206 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 10:15:08,206 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 10:15:08,206 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 10:15:08,206 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 10:15:08,206 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 10:15:08,206 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 10:15:08,206 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=e574f5116979219f1bbe4122f9260818, ASSIGN}] 2023-07-18 10:15:08,207 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=e574f5116979219f1bbe4122f9260818, ASSIGN 2023-07-18 10:15:08,208 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=e574f5116979219f1bbe4122f9260818, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37193,1689675307802; forceNewPlan=false, retain=false 2023-07-18 10:15:08,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:15:08,358 INFO [jenkins-hbase4:46153] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 10:15:08,360 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e574f5116979219f1bbe4122f9260818, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:08,360 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689675308133.e574f5116979219f1bbe4122f9260818.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675308360"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675308360"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675308360"}]},"ts":"1689675308360"} 2023-07-18 10:15:08,361 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure e574f5116979219f1bbe4122f9260818, server=jenkins-hbase4.apache.org,37193,1689675307802}] 2023-07-18 10:15:08,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:15:08,514 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:08,515 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 10:15:08,516 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47452, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 10:15:08,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:08,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e574f5116979219f1bbe4122f9260818, NAME => 't1,,1689675308133.e574f5116979219f1bbe4122f9260818.', STARTKEY => '', ENDKEY => ''} 2023-07-18 10:15:08,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689675308133.e574f5116979219f1bbe4122f9260818.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 10:15:08,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,521 INFO [StoreOpener-e574f5116979219f1bbe4122f9260818-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,522 DEBUG [StoreOpener-e574f5116979219f1bbe4122f9260818-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/default/t1/e574f5116979219f1bbe4122f9260818/cf1 2023-07-18 10:15:08,522 DEBUG [StoreOpener-e574f5116979219f1bbe4122f9260818-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/default/t1/e574f5116979219f1bbe4122f9260818/cf1 2023-07-18 10:15:08,523 INFO [StoreOpener-e574f5116979219f1bbe4122f9260818-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e574f5116979219f1bbe4122f9260818 columnFamilyName cf1 2023-07-18 10:15:08,523 INFO [StoreOpener-e574f5116979219f1bbe4122f9260818-1] regionserver.HStore(310): Store=e574f5116979219f1bbe4122f9260818/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 10:15:08,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/default/t1/e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/default/t1/e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:08,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/default/t1/e574f5116979219f1bbe4122f9260818/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 10:15:08,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e574f5116979219f1bbe4122f9260818; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12065624160, jitterRate=0.12369881570339203}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 10:15:08,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e574f5116979219f1bbe4122f9260818: 2023-07-18 10:15:08,531 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689675308133.e574f5116979219f1bbe4122f9260818., pid=14, masterSystemTime=1689675308514 2023-07-18 10:15:08,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:08,535 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 
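The CompactionConfiguration(173) line above prints the effective per-store compaction settings for cf1 (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, off-peak ratio 5.0, major period 604800000 ms with 0.5 jitter). Those values correspond to the stock HBase configuration keys; the mapping below is an assumption based on the standard property names, not something read from this log, and is sketched programmatically rather than as hbase-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Values mirroring the CompactionConfiguration line logged for region e574f5...818, cf1.
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
    conf.setLong("hbase.hregion.majorcompaction", 604_800_000L);          // major period (7 days)
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
    System.out.println("compaction ratio = " + conf.getFloat("hbase.hstore.compaction.ratio", 0f));
  }
}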
2023-07-18 10:15:08,535 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e574f5116979219f1bbe4122f9260818, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:08,535 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689675308133.e574f5116979219f1bbe4122f9260818.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675308535"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689675308535"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689675308535"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689675308535"}]},"ts":"1689675308535"} 2023-07-18 10:15:08,538 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-18 10:15:08,538 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure e574f5116979219f1bbe4122f9260818, server=jenkins-hbase4.apache.org,37193,1689675307802 in 175 msec 2023-07-18 10:15:08,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 10:15:08,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=e574f5116979219f1bbe4122f9260818, ASSIGN in 332 msec 2023-07-18 10:15:08,540 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 10:15:08,540 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675308540"}]},"ts":"1689675308540"} 2023-07-18 10:15:08,543 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-18 10:15:08,545 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 10:15:08,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 412 msec 2023-07-18 10:15:08,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 10:15:08,753 INFO [Listener at localhost/44679] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-18 10:15:08,753 DEBUG [Listener at localhost/44679] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-18 10:15:08,753 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:08,755 INFO [Listener at localhost/44679] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-18 10:15:08,755 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:08,755 INFO [Listener at localhost/44679] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
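The "Waiting until all regions of table t1 get assigned" / "All regions for table t1 assigned" messages above come from HBaseTestingUtility and are the usual barrier between createTable and the first use of the table in a minicluster test. A minimal sketch of that call, assuming a started HBaseTestingUtility instance is passed in (as TestRSGroupsBase keeps one for the whole test class):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentSketch {
  // testUtil must wrap a running minicluster; the default timeout is the 60s seen in the log.
  static void waitForT1(HBaseTestingUtility testUtil) throws IOException {
    testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("t1"));
  }
}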
2023-07-18 10:15:08,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 10:15:08,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 10:15:08,759 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 10:15:08,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-18 10:15:08,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:36172 deadline: 1689675368756, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-18 10:15:08,761 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:08,762 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-18 10:15:08,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:08,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:08,863 INFO [Listener at localhost/44679] client.HBaseAdmin$15(890): Started disable of t1 2023-07-18 10:15:08,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-18 10:15:08,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-18 10:15:08,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 10:15:08,867 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675308867"}]},"ts":"1689675308867"} 2023-07-18 10:15:08,868 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-18 10:15:08,870 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-18 10:15:08,871 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=e574f5116979219f1bbe4122f9260818, UNASSIGN}] 2023-07-18 10:15:08,871 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=e574f5116979219f1bbe4122f9260818, UNASSIGN 2023-07-18 10:15:08,872 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e574f5116979219f1bbe4122f9260818, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:08,872 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689675308133.e574f5116979219f1bbe4122f9260818.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675308872"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689675308872"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689675308872"}]},"ts":"1689675308872"} 2023-07-18 10:15:08,873 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure e574f5116979219f1bbe4122f9260818, server=jenkins-hbase4.apache.org,37193,1689675307802}] 2023-07-18 10:15:08,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 10:15:09,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:09,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e574f5116979219f1bbe4122f9260818, disabling compactions & flushes 2023-07-18 10:15:09,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:09,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:09,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689675308133.e574f5116979219f1bbe4122f9260818. after waiting 0 ms 2023-07-18 10:15:09,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 
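The second create of 't1' above is rejected in CreateTableProcedure.prepareCreate with TableExistsException and rolled back (pid=15, exec-time 5 msec). Client code that may race with an existing table typically either checks Admin.tableExists first or treats the exception as "already present"; the helper below is a hedged sketch of that pattern (the method name and descriptor parameter are illustrative, not from the test).

import java.io.IOException;
import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class CreateIfAbsentSketch {
  /** Creates the table unless it already exists; returns true if this call created it. */
  static boolean createIfAbsent(Admin admin, TableDescriptor desc) throws IOException {
    if (admin.tableExists(desc.getTableName())) {
      return false;                     // skip the round trip that would fail anyway
    }
    try {
      admin.createTable(desc);
      return true;
    } catch (TableExistsException e) {
      // Lost a race with another creator; the master rolled the procedure back,
      // exactly as for pid=15 in the log. Safe to treat as "already present".
      return false;
    }
  }
}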
2023-07-18 10:15:09,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/default/t1/e574f5116979219f1bbe4122f9260818/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 10:15:09,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689675308133.e574f5116979219f1bbe4122f9260818. 2023-07-18 10:15:09,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e574f5116979219f1bbe4122f9260818: 2023-07-18 10:15:09,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:09,031 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e574f5116979219f1bbe4122f9260818, regionState=CLOSED 2023-07-18 10:15:09,031 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689675308133.e574f5116979219f1bbe4122f9260818.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689675309031"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689675309031"}]},"ts":"1689675309031"} 2023-07-18 10:15:09,033 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 10:15:09,033 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure e574f5116979219f1bbe4122f9260818, server=jenkins-hbase4.apache.org,37193,1689675307802 in 159 msec 2023-07-18 10:15:09,035 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 10:15:09,035 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=e574f5116979219f1bbe4122f9260818, UNASSIGN in 162 msec 2023-07-18 10:15:09,035 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689675309035"}]},"ts":"1689675309035"} 2023-07-18 10:15:09,036 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-18 10:15:09,038 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-18 10:15:09,040 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 174 msec 2023-07-18 10:15:09,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 10:15:09,169 INFO [Listener at localhost/44679] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-18 10:15:09,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-18 10:15:09,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-18 10:15:09,172 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 10:15:09,172 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-18 10:15:09,173 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-18 10:15:09,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:09,177 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:09,178 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/e574f5116979219f1bbe4122f9260818/cf1, FileablePath, hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/e574f5116979219f1bbe4122f9260818/recovered.edits] 2023-07-18 10:15:09,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 10:15:09,184 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/e574f5116979219f1bbe4122f9260818/recovered.edits/4.seqid to hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/archive/data/default/t1/e574f5116979219f1bbe4122f9260818/recovered.edits/4.seqid 2023-07-18 10:15:09,185 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/.tmp/data/default/t1/e574f5116979219f1bbe4122f9260818 2023-07-18 10:15:09,185 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 10:15:09,187 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-18 10:15:09,188 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-18 10:15:09,190 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-18 10:15:09,191 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-18 10:15:09,191 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
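Dropping 't1' requires the disable/delete pair traced above: DisableTableProcedure (pid=16) unassigns the region, then DeleteTableProcedure (pid=19) archives the region directory via HFileArchiver and removes the rows from hbase:meta. The equivalent client calls, sketched under the same assumptions as the earlier snippets:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropT1Sketch {
  static void dropT1(Admin admin) throws IOException {
    TableName t1 = TableName.valueOf("t1");
    if (admin.tableExists(t1)) {
      if (admin.isTableEnabled(t1)) {
        admin.disableTable(t1);   // DisableTableProcedure: regions are closed/unassigned first
      }
      admin.deleteTable(t1);      // DeleteTableProcedure: FS layout archived, meta rows removed
    }
  }
}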
2023-07-18 10:15:09,191 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689675308133.e574f5116979219f1bbe4122f9260818.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689675309191"}]},"ts":"9223372036854775807"} 2023-07-18 10:15:09,192 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 10:15:09,192 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e574f5116979219f1bbe4122f9260818, NAME => 't1,,1689675308133.e574f5116979219f1bbe4122f9260818.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 10:15:09,192 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-18 10:15:09,192 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689675309192"}]},"ts":"9223372036854775807"} 2023-07-18 10:15:09,193 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-18 10:15:09,195 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 10:15:09,196 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-18 10:15:09,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 10:15:09,280 INFO [Listener at localhost/44679] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-18 10:15:09,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:09,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 10:15:09,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:09,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:09,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:09,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:15:09,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:15:09,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,297 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:15:09,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:09,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:09,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:09,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36172 deadline: 1689676509308, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:09,309 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:15:09,312 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:09,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,313 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:09,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:09,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:09,335 INFO [Listener at localhost/44679] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=571 (was 560) - Thread LEAK? -, OpenFileDescriptor=839 (was 832) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=428 (was 428), ProcessCount=173 (was 173), AvailableMemoryMB=2960 (was 2973) 2023-07-18 10:15:09,336 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-18 10:15:09,354 INFO [Listener at localhost/44679] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=428, ProcessCount=173, AvailableMemoryMB=2960 2023-07-18 10:15:09,354 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-18 10:15:09,354 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-18 10:15:09,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:09,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 10:15:09,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:09,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:09,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:09,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:15:09,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:15:09,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,370 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:15:09,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:09,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,372 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:09,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:09,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36172 deadline: 1689676509381, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:09,381 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:15:09,383 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:09,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,384 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:09,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:09,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:09,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 10:15:09,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:15:09,386 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-18 10:15:09,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 10:15:09,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 10:15:09,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:09,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 10:15:09,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:09,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:09,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:09,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:15:09,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:15:09,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,407 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:15:09,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:09,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:09,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:09,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36172 deadline: 1689676509422, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:09,423 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:15:09,425 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:09,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,426 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:09,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:09,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:09,446 INFO [Listener at localhost/44679] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573 (was 571) - Thread LEAK? -, OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=428 (was 428), ProcessCount=173 (was 173), AvailableMemoryMB=2963 (was 2960) - AvailableMemoryMB LEAK? 
- 2023-07-18 10:15:09,447 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 10:15:09,467 INFO [Listener at localhost/44679] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=428, ProcessCount=173, AvailableMemoryMB=2963 2023-07-18 10:15:09,467 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 10:15:09,467 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-18 10:15:09,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:09,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 10:15:09,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:09,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:09,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:09,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:15:09,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:15:09,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,482 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:15:09,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:09,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,486 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:09,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:09,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36172 deadline: 1689676509491, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:09,492 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:15:09,494 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:09,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,494 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:09,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:09,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:09,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:09,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 10:15:09,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:09,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:09,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:09,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:15:09,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:15:09,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,510 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:15:09,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:09,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:09,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:09,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36172 deadline: 1689676509519, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:09,520 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:15:09,522 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:09,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,523 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:09,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:09,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:09,542 INFO [Listener at localhost/44679] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=574 (was 573) - Thread LEAK? 
-, OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=428 (was 428), ProcessCount=173 (was 173), AvailableMemoryMB=2963 (was 2963) 2023-07-18 10:15:09,542 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-18 10:15:09,561 INFO [Listener at localhost/44679] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=428, ProcessCount=173, AvailableMemoryMB=2962 2023-07-18 10:15:09,562 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-18 10:15:09,562 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-18 10:15:09,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:09,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 10:15:09,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:09,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:09,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:09,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:15:09,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:15:09,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,576 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:15:09,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:09,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,579 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:09,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:09,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36172 deadline: 1689676509585, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:09,586 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 10:15:09,588 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:09,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,588 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:09,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:09,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:09,589 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-18 10:15:09,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-18 10:15:09,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 10:15:09,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 10:15:09,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 10:15:09,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-18 10:15:09,605 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 10:15:09,609 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:09,612 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 11 msec 2023-07-18 10:15:09,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 10:15:09,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 10:15:09,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:36172 deadline: 1689676509707, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-18 10:15:09,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 10:15:09,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-18 10:15:09,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 10:15:09,732 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 10:15:09,734 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 19 msec 2023-07-18 10:15:09,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 10:15:09,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-18 10:15:09,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 10:15:09,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 10:15:09,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 10:15:09,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-18 10:15:09,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 10:15:09,847 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 10:15:09,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 10:15:09,850 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 10:15:09,852 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 10:15:09,853 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 10:15:09,853 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 10:15:09,855 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 10:15:09,860 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 10:15:09,861 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 17 msec 2023-07-18 10:15:09,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 10:15:09,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 10:15:09,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 10:15:09,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 10:15:09,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 10:15:09,959 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 10:15:09,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:15:09,959 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 10:15:09,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 10:15:09,959 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase 
MasterObservers 2023-07-18 10:15:09,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:36172 deadline: 1689675369961, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-18 10:15:09,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:09,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
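Annotation: the testNamespaceConstraint run above exercises the coupling between namespaces and region-server groups. A namespace created with the hbase.rsgroup.name property pins its group ("RSGroup Group_foo is referenced by namespace: Group_foo" blocks RemoveRSGroup), and creating a namespace that points at a missing group is rejected by RSGroupAdminEndpoint.preCreateNamespace ("Region server group foo does not exist."). The following Java sketch walks that flow; it is not the verbatim test code, the RSGroupAdminClient calls are assumed from the client class visible in the stack traces, and intermediate steps (ModifyNamespace, Group_anotherGroup) are trimmed.

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class NamespaceGroupConstraintSketch {

      // Walks the constraint checks recorded in the log; group/namespace names are taken
      // from the run above, the overall shape of this helper is illustrative.
      static void exercise(Admin admin, RSGroupAdminClient rsGroupAdmin) throws IOException {
        // Create the group, then bind a namespace to it via the property shown in the
        // CreateNamespace request: {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'}.
        rsGroupAdmin.addRSGroup("Group_foo");
        admin.createNamespace(NamespaceDescriptor.create("Group_foo")
            .addConfiguration("hbase.rsgroup.name", "Group_foo")
            .build());

        // Removing a group that a namespace still references is rejected:
        // "RSGroup Group_foo is referenced by namespace: Group_foo".
        try {
          rsGroupAdmin.removeRSGroup("Group_foo");
        } catch (IOException expected) {
          // The server-side ConstraintException surfaces as an IOException on the client.
        }

        // Once the namespace is deleted, the group can be removed.
        admin.deleteNamespace("Group_foo");
        rsGroupAdmin.removeRSGroup("Group_foo");

        // A namespace may not point at a group that does not exist:
        // "Region server group foo does not exist." from preCreateNamespace.
        try {
          admin.createNamespace(NamespaceDescriptor.create("Group_foo")
              .addConfiguration("hbase.rsgroup.name", "foo")
              .build());
        } catch (IOException expected) {
          // Rejected by the RSGroupAdminEndpoint master coprocessor.
        }
      }
    }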
2023-07-18 10:15:09,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:09,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:09,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:09,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-18 10:15:09,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 10:15:09,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 10:15:09,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
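Annotation: the repeated ConstraintException WARNs ("Got this on setup, FYI") come from the TestRSGroupsBase cleanup that runs around each test method: it removes and re-adds a "master" group and then tries to move the master's address (port 46153 here) into it, but RSGroupAdminServer.moveServers only accepts online region servers, so the move fails and the failure is merely logged. A hedged sketch of that cleanup step, assuming the RSGroupAdminClient and Address types named in the stack traces; the helper below is illustrative, not the test's exact code.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class MasterGroupCleanupSketch {
      private static final Logger LOG = LoggerFactory.getLogger(MasterGroupCleanupSketch.class);

      // Mirrors the RemoveRSGroup / AddRSGroup / MoveServers sequence seen in the cleanup above.
      static void restoreMasterGroup(RSGroupAdminClient rsGroupAdmin, Address masterAddress)
          throws IOException {
        rsGroupAdmin.removeRSGroup("master");
        rsGroupAdmin.addRSGroup("master");
        try {
          // Expected to fail: the master is not an online region server, so the server
          // answers "Server <host>:<port> is either offline or it does not exist."
          rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (IOException e) {
          LOG.warn("Got this on setup, FYI", e); // mirrors the WARN in the log
        }
      }
    }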
2023-07-18 10:15:09,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 10:15:09,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 10:15:09,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 10:15:09,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 10:15:09,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 10:15:09,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 10:15:09,978 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 10:15:09,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 10:15:09,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 10:15:09,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 10:15:09,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 10:15:09,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 10:15:09,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46153] to rsgroup master 2023-07-18 10:15:09,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 10:15:09,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36172 deadline: 1689676509986, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 2023-07-18 10:15:09,986 WARN [Listener at localhost/44679] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46153 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 10:15:09,988 INFO [Listener at localhost/44679] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 10:15:09,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 10:15:09,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 10:15:09,989 INFO [Listener at localhost/44679] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35165, jenkins-hbase4.apache.org:37027, jenkins-hbase4.apache.org:37193, jenkins-hbase4.apache.org:40717], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 10:15:09,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 10:15:09,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46153] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 10:15:10,007 INFO [Listener at localhost/44679] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574 (was 574), OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=428 (was 428), ProcessCount=173 (was 173), AvailableMemoryMB=2941 (was 2962) 2023-07-18 10:15:10,007 WARN [Listener at localhost/44679] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-18 10:15:10,008 INFO [Listener at localhost/44679] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 10:15:10,008 INFO [Listener at localhost/44679] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 10:15:10,008 DEBUG [Listener at localhost/44679] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x58a1b5a8 to 127.0.0.1:56417 2023-07-18 10:15:10,008 DEBUG [Listener at localhost/44679] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,008 DEBUG [Listener at localhost/44679] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 
10:15:10,008 DEBUG [Listener at localhost/44679] util.JVMClusterUtil(257): Found active master hash=1771130930, stopped=false 2023-07-18 10:15:10,008 DEBUG [Listener at localhost/44679] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 10:15:10,008 DEBUG [Listener at localhost/44679] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 10:15:10,008 INFO [Listener at localhost/44679] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:10,011 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:10,011 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:10,011 INFO [Listener at localhost/44679] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 10:15:10,011 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:10,011 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:10,011 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 10:15:10,011 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:10,011 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:10,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:10,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:10,012 DEBUG [Listener at localhost/44679] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0d9e62ef to 127.0.0.1:56417 2023-07-18 10:15:10,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:10,012 DEBUG [Listener at localhost/44679] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40717-0x10177ed96110003, 
quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 10:15:10,012 INFO [Listener at localhost/44679] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37027,1689675305953' ***** 2023-07-18 10:15:10,012 INFO [Listener at localhost/44679] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:15:10,012 INFO [Listener at localhost/44679] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35165,1689675306111' ***** 2023-07-18 10:15:10,012 INFO [Listener at localhost/44679] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:15:10,012 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:10,012 INFO [Listener at localhost/44679] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40717,1689675306287' ***** 2023-07-18 10:15:10,012 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:10,012 INFO [Listener at localhost/44679] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:15:10,014 INFO [Listener at localhost/44679] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37193,1689675307802' ***** 2023-07-18 10:15:10,014 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:10,014 INFO [Listener at localhost/44679] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 10:15:10,017 INFO [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:10,020 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:15:10,020 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:10,021 INFO [RS:1;jenkins-hbase4:35165] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4bc8a2d7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:10,021 INFO [RS:2;jenkins-hbase4:40717] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4316d895{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:10,021 INFO [RS:0;jenkins-hbase4:37027] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@58e40637{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:10,022 INFO [RS:1;jenkins-hbase4:35165] server.AbstractConnector(383): Stopped ServerConnector@166c2be4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:10,022 INFO [RS:1;jenkins-hbase4:35165] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:10,022 INFO [RS:3;jenkins-hbase4:37193] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3da779a5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 10:15:10,022 INFO 
[RS:2;jenkins-hbase4:40717] server.AbstractConnector(383): Stopped ServerConnector@b0c4942{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:10,022 INFO [RS:1;jenkins-hbase4:35165] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@522757e5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:10,022 INFO [RS:0;jenkins-hbase4:37027] server.AbstractConnector(383): Stopped ServerConnector@22526232{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:10,023 INFO [RS:3;jenkins-hbase4:37193] server.AbstractConnector(383): Stopped ServerConnector@2a301633{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:10,022 INFO [RS:2;jenkins-hbase4:40717] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:10,023 INFO [RS:3;jenkins-hbase4:37193] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:10,023 INFO [RS:0;jenkins-hbase4:37027] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:10,023 INFO [RS:1;jenkins-hbase4:35165] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5645df0e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:10,025 INFO [RS:0;jenkins-hbase4:37027] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1b1b3fe0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:10,025 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 10:15:10,026 INFO [RS:0;jenkins-hbase4:37027] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3a52efb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:10,025 INFO [RS:3;jenkins-hbase4:37193] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4858353d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:10,025 INFO [RS:2;jenkins-hbase4:40717] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@505a3c39{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:10,027 INFO [RS:3;jenkins-hbase4:37193] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38fe434a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:10,026 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 10:15:10,028 INFO [RS:1;jenkins-hbase4:35165] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:15:10,028 INFO [RS:2;jenkins-hbase4:40717] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@36964252{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:10,028 INFO [RS:0;jenkins-hbase4:37027] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:15:10,028 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:15:10,028 INFO [RS:0;jenkins-hbase4:37027] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:15:10,028 INFO [RS:3;jenkins-hbase4:37193] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:15:10,028 INFO [RS:3;jenkins-hbase4:37193] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:15:10,028 INFO [RS:3;jenkins-hbase4:37193] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 10:15:10,028 INFO [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:10,028 DEBUG [RS:3;jenkins-hbase4:37193] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1bb9e2c0 to 127.0.0.1:56417 2023-07-18 10:15:10,029 DEBUG [RS:3;jenkins-hbase4:37193] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,029 INFO [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37193,1689675307802; all regions closed. 2023-07-18 10:15:10,028 INFO [RS:1;jenkins-hbase4:35165] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:15:10,028 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:15:10,028 INFO [RS:0;jenkins-hbase4:37027] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 10:15:10,029 INFO [RS:1;jenkins-hbase4:35165] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
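[editor's note] The entries above trace the test's teardown path: the Waiter loop logs "Waiting for cleanup to finish" until the RSGroup state is back to just the default and master groups, then HBaseTestingUtility shuts the minicluster down and each region server begins stopping. The following is a minimal sketch of that wait-then-shutdown pattern; cleanupDone() is a hypothetical placeholder, not the actual check TestRSGroupsBase performs.

// Illustrative sketch only: the wait-then-shutdown pattern visible in this log.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.Waiter;

public class MiniClusterTeardownSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  // Placeholder for the real condition (only 'default' and 'master' groups remain).
  static boolean cleanupDone() {
    return true;
  }

  static void tearDownAfterTest() throws Exception {
    // Corresponds to "Waiting up to [60,000] milli-secs" above.
    TEST_UTIL.waitFor(60000, new Waiter.Predicate<Exception>() {
      @Override
      public boolean evaluate() throws Exception {
        return cleanupDone();
      }
    });
    // Corresponds to "Shutting down minicluster" and the STOPPING/STOPPED entries that follow.
    TEST_UTIL.shutdownMiniCluster();
  }
}

The polling predicate, rather than a fixed sleep, is what keeps this teardown from racing the asynchronous RSGroup cleanup seen in the log.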
2023-07-18 10:15:10,029 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(3305): Received CLOSE for e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:10,029 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(3305): Received CLOSE for 61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:10,029 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:10,030 DEBUG [RS:1;jenkins-hbase4:35165] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7266baae to 127.0.0.1:56417 2023-07-18 10:15:10,030 DEBUG [RS:1;jenkins-hbase4:35165] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 61b3dc7a57f4e33b37513ac05598296c, disabling compactions & flushes 2023-07-18 10:15:10,030 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 10:15:10,030 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1478): Online Regions={61b3dc7a57f4e33b37513ac05598296c=hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c.} 2023-07-18 10:15:10,030 INFO [RS:2;jenkins-hbase4:40717] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 10:15:10,030 DEBUG [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1504): Waiting on 61b3dc7a57f4e33b37513ac05598296c 2023-07-18 10:15:10,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:10,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:10,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. after waiting 0 ms 2023-07-18 10:15:10,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 
2023-07-18 10:15:10,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 61b3dc7a57f4e33b37513ac05598296c 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-18 10:15:10,030 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:10,030 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 10:15:10,030 DEBUG [RS:0;jenkins-hbase4:37027] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1306cf76 to 127.0.0.1:56417 2023-07-18 10:15:10,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e516bbec513a2690d38980a1e6d81fa8, disabling compactions & flushes 2023-07-18 10:15:10,031 DEBUG [RS:0;jenkins-hbase4:37027] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,031 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 10:15:10,031 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1478): Online Regions={e516bbec513a2690d38980a1e6d81fa8=hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8.} 2023-07-18 10:15:10,031 DEBUG [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1504): Waiting on e516bbec513a2690d38980a1e6d81fa8 2023-07-18 10:15:10,031 INFO [RS:2;jenkins-hbase4:40717] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 10:15:10,031 INFO [RS:2;jenkins-hbase4:40717] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 10:15:10,031 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:10,031 DEBUG [RS:2;jenkins-hbase4:40717] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x238c9294 to 127.0.0.1:56417 2023-07-18 10:15:10,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:10,031 DEBUG [RS:2;jenkins-hbase4:40717] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:10,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. after waiting 0 ms 2023-07-18 10:15:10,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:10,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e516bbec513a2690d38980a1e6d81fa8 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-18 10:15:10,031 INFO [RS:2;jenkins-hbase4:40717] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:15:10,034 INFO [RS:2;jenkins-hbase4:40717] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:15:10,034 INFO [RS:2;jenkins-hbase4:40717] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-18 10:15:10,034 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 10:15:10,034 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 10:15:10,034 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-18 10:15:10,034 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 10:15:10,034 DEBUG [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-18 10:15:10,034 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 10:15:10,034 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 10:15:10,034 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 10:15:10,034 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 10:15:10,034 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-18 10:15:10,037 DEBUG [RS:3;jenkins-hbase4:37193] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs 2023-07-18 10:15:10,037 INFO [RS:3;jenkins-hbase4:37193] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37193%2C1689675307802:(num 1689675308116) 2023-07-18 10:15:10,037 DEBUG [RS:3;jenkins-hbase4:37193] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,037 INFO [RS:3;jenkins-hbase4:37193] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:10,039 INFO [RS:3;jenkins-hbase4:37193] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 10:15:10,039 INFO [RS:3;jenkins-hbase4:37193] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:15:10,039 INFO [RS:3;jenkins-hbase4:37193] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:15:10,039 INFO [RS:3;jenkins-hbase4:37193] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:15:10,039 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 10:15:10,040 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:10,044 INFO [RS:3;jenkins-hbase4:37193] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37193 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802 2023-07-18 10:15:10,051 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:10,051 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37193,1689675307802] 2023-07-18 10:15:10,051 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37193,1689675307802; numProcessing=1 2023-07-18 10:15:10,052 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:10,052 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37193,1689675307802 already deleted, retry=false 2023-07-18 10:15:10,053 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; 
jenkins-hbase4.apache.org,37193,1689675307802 expired; onlineServers=3 2023-07-18 10:15:10,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8/.tmp/info/d4a8f1de3b59441585675054d61f1366 2023-07-18 10:15:10,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c/.tmp/m/1a491df5b7f74ed7b1ef5a9ab8e00610 2023-07-18 10:15:10,067 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/.tmp/info/b552d10080414a5cbf1fd24dccacb2e4 2023-07-18 10:15:10,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d4a8f1de3b59441585675054d61f1366 2023-07-18 10:15:10,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8/.tmp/info/d4a8f1de3b59441585675054d61f1366 as hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8/info/d4a8f1de3b59441585675054d61f1366 2023-07-18 10:15:10,072 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b552d10080414a5cbf1fd24dccacb2e4 2023-07-18 10:15:10,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1a491df5b7f74ed7b1ef5a9ab8e00610 2023-07-18 10:15:10,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c/.tmp/m/1a491df5b7f74ed7b1ef5a9ab8e00610 as hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c/m/1a491df5b7f74ed7b1ef5a9ab8e00610 2023-07-18 10:15:10,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d4a8f1de3b59441585675054d61f1366 2023-07-18 10:15:10,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8/info/d4a8f1de3b59441585675054d61f1366, entries=3, sequenceid=9, filesize=5.0 K 2023-07-18 10:15:10,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for e516bbec513a2690d38980a1e6d81fa8 in 45ms, sequenceid=9, compaction requested=false 
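[editor's note] Each region close above triggers a memstore flush, and the log records its size, sequence id, and duration ("Finished flush of dataSize ~267 B/267 ... in 45ms, sequenceid=9"). When triaging flaky runs it can help to pull those numbers out of the raw log; the sketch below assumes exactly the message wording shown here and is not an official HBase tool.

// Hedged sketch: extract flush size and duration from "Finished flush of dataSize ..." entries.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FlushLogParser {
  // Groups: 1 = flushed bytes, 2 = region encoded name, 3 = millis, 4 = sequence id.
  private static final Pattern FLUSH = Pattern.compile(
      "Finished flush of dataSize ~[^/]+/(\\d+),.*? for (\\S+) in (\\d+)ms, sequenceid=(\\d+)");

  public static void main(String[] args) {
    String line = "Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, "
        + "currentSize=0 B/0 for e516bbec513a2690d38980a1e6d81fa8 in 45ms, sequenceid=9";
    Matcher m = FLUSH.matcher(line);
    if (m.find()) {
      System.out.printf("region=%s flushedBytes=%s millis=%s seqId=%s%n",
          m.group(2), m.group(1), m.group(3), m.group(4));
    }
  }
}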
2023-07-18 10:15:10,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1a491df5b7f74ed7b1ef5a9ab8e00610 2023-07-18 10:15:10,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c/m/1a491df5b7f74ed7b1ef5a9ab8e00610, entries=12, sequenceid=29, filesize=5.4 K 2023-07-18 10:15:10,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 61b3dc7a57f4e33b37513ac05598296c in 54ms, sequenceid=29, compaction requested=false 2023-07-18 10:15:10,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/namespace/e516bbec513a2690d38980a1e6d81fa8/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-18 10:15:10,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:10,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e516bbec513a2690d38980a1e6d81fa8: 2023-07-18 10:15:10,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689675307191.e516bbec513a2690d38980a1e6d81fa8. 2023-07-18 10:15:10,088 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/.tmp/rep_barrier/79de9e878e784bd2977f4d7dc6802446 2023-07-18 10:15:10,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/rsgroup/61b3dc7a57f4e33b37513ac05598296c/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-18 10:15:10,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:15:10,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 2023-07-18 10:15:10,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 61b3dc7a57f4e33b37513ac05598296c: 2023-07-18 10:15:10,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689675307228.61b3dc7a57f4e33b37513ac05598296c. 
2023-07-18 10:15:10,093 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 79de9e878e784bd2977f4d7dc6802446 2023-07-18 10:15:10,095 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:10,101 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/.tmp/table/22edab96e2d34075ab1cccbdf5298ff3 2023-07-18 10:15:10,105 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 22edab96e2d34075ab1cccbdf5298ff3 2023-07-18 10:15:10,106 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/.tmp/info/b552d10080414a5cbf1fd24dccacb2e4 as hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/info/b552d10080414a5cbf1fd24dccacb2e4 2023-07-18 10:15:10,111 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b552d10080414a5cbf1fd24dccacb2e4 2023-07-18 10:15:10,111 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/info/b552d10080414a5cbf1fd24dccacb2e4, entries=22, sequenceid=26, filesize=7.3 K 2023-07-18 10:15:10,112 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/.tmp/rep_barrier/79de9e878e784bd2977f4d7dc6802446 as hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/rep_barrier/79de9e878e784bd2977f4d7dc6802446 2023-07-18 10:15:10,116 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 79de9e878e784bd2977f4d7dc6802446 2023-07-18 10:15:10,116 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/rep_barrier/79de9e878e784bd2977f4d7dc6802446, entries=1, sequenceid=26, filesize=4.9 K 2023-07-18 10:15:10,117 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/.tmp/table/22edab96e2d34075ab1cccbdf5298ff3 as hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/table/22edab96e2d34075ab1cccbdf5298ff3 2023-07-18 10:15:10,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 22edab96e2d34075ab1cccbdf5298ff3 2023-07-18 10:15:10,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/table/22edab96e2d34075ab1cccbdf5298ff3, entries=6, sequenceid=26, filesize=5.1 K 2023-07-18 10:15:10,122 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 88ms, sequenceid=26, compaction requested=false 2023-07-18 10:15:10,129 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-18 10:15:10,130 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 10:15:10,130 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 10:15:10,130 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 10:15:10,130 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 10:15:10,211 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:10,211 INFO [RS:3;jenkins-hbase4:37193] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37193,1689675307802; zookeeper connection closed. 2023-07-18 10:15:10,211 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37193-0x10177ed9611000b, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:10,214 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5639aa36] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5639aa36 2023-07-18 10:15:10,230 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35165,1689675306111; all regions closed. 2023-07-18 10:15:10,231 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37027,1689675305953; all regions closed. 2023-07-18 10:15:10,234 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40717,1689675306287; all regions closed. 
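[editor's note] The NodeDeleted events on /hbase/rs/&lt;server&gt; and the "RegionServer ephemeral node deleted, processing expiration" entries earlier in this log are ZooKeeper's ephemeral-node liveness mechanism at work: each region server registers an ephemeral znode, and the master's tracker reacts when that znode disappears. The sketch below is a stripped-down, generic illustration of that pattern with the plain ZooKeeper client; the paths mirror the log, but this is not HBase's RegionServerTracker code, and in the real system the server and the tracker use separate sessions.

// Generic illustration only: an ephemeral "liveness" znode plus a watch on its deletion.
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralLivenessSketch {
  public static void main(String[] args) throws Exception {
    // Quorum address taken from the log; adjust for your own cluster.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:56417", 30000, event -> { });

    String path = "/hbase/rs/jenkins-hbase4.apache.org,37193,1689675307802";
    // "Region server" side: an ephemeral node that vanishes when its session ends.
    zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

    // "Tracker" side: watch for NodeDeleted, as the master does for expired servers.
    zk.exists(path, new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDeleted) {
          System.out.println("ephemeral node deleted, processing expiration: " + event.getPath());
        }
      }
    });

    // Closing the session removes the ephemeral node; a tracker on another session
    // would then receive the NodeDeleted event, as seen in the log above.
    zk.close();
  }
}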
2023-07-18 10:15:10,241 DEBUG [RS:0;jenkins-hbase4:37027] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs 2023-07-18 10:15:10,242 INFO [RS:0;jenkins-hbase4:37027] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37027%2C1689675305953:(num 1689675306873) 2023-07-18 10:15:10,242 DEBUG [RS:1;jenkins-hbase4:35165] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs 2023-07-18 10:15:10,242 INFO [RS:1;jenkins-hbase4:35165] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35165%2C1689675306111:(num 1689675306867) 2023-07-18 10:15:10,242 DEBUG [RS:1;jenkins-hbase4:35165] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,242 DEBUG [RS:0;jenkins-hbase4:37027] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,242 INFO [RS:1;jenkins-hbase4:35165] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:10,242 INFO [RS:0;jenkins-hbase4:37027] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:10,242 INFO [RS:0;jenkins-hbase4:37027] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 10:15:10,242 INFO [RS:1;jenkins-hbase4:35165] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 10:15:10,242 INFO [RS:1;jenkins-hbase4:35165] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:15:10,242 INFO [RS:1;jenkins-hbase4:35165] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:15:10,242 INFO [RS:0;jenkins-hbase4:37027] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 10:15:10,242 INFO [RS:0;jenkins-hbase4:37027] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 10:15:10,243 INFO [RS:0;jenkins-hbase4:37027] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:15:10,242 INFO [RS:1;jenkins-hbase4:35165] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 10:15:10,242 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:15:10,242 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 10:15:10,244 INFO [RS:0;jenkins-hbase4:37027] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37027 2023-07-18 10:15:10,245 INFO [RS:1;jenkins-hbase4:35165] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35165 2023-07-18 10:15:10,247 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:10,247 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:10,248 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:10,248 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37027,1689675305953 2023-07-18 10:15:10,248 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:10,248 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:10,248 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37027,1689675305953] 2023-07-18 10:15:10,248 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35165,1689675306111 2023-07-18 10:15:10,248 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37027,1689675305953; numProcessing=2 2023-07-18 10:15:10,250 DEBUG [RS:2;jenkins-hbase4:40717] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs 2023-07-18 10:15:10,250 INFO [RS:2;jenkins-hbase4:40717] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40717%2C1689675306287.meta:.meta(num 1689675307083) 2023-07-18 10:15:10,250 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37027,1689675305953 already deleted, retry=false 2023-07-18 10:15:10,250 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37027,1689675305953 expired; onlineServers=2 2023-07-18 10:15:10,250 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration 
[jenkins-hbase4.apache.org,35165,1689675306111] 2023-07-18 10:15:10,251 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35165,1689675306111; numProcessing=3 2023-07-18 10:15:10,252 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35165,1689675306111 already deleted, retry=false 2023-07-18 10:15:10,252 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35165,1689675306111 expired; onlineServers=1 2023-07-18 10:15:10,257 DEBUG [RS:2;jenkins-hbase4:40717] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/oldWALs 2023-07-18 10:15:10,257 INFO [RS:2;jenkins-hbase4:40717] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40717%2C1689675306287:(num 1689675306891) 2023-07-18 10:15:10,257 DEBUG [RS:2;jenkins-hbase4:40717] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,257 INFO [RS:2;jenkins-hbase4:40717] regionserver.LeaseManager(133): Closed leases 2023-07-18 10:15:10,257 INFO [RS:2;jenkins-hbase4:40717] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 10:15:10,257 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:15:10,258 INFO [RS:2;jenkins-hbase4:40717] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40717 2023-07-18 10:15:10,260 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40717,1689675306287 2023-07-18 10:15:10,260 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 10:15:10,266 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40717,1689675306287] 2023-07-18 10:15:10,267 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40717,1689675306287; numProcessing=4 2023-07-18 10:15:10,268 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40717,1689675306287 already deleted, retry=false 2023-07-18 10:15:10,268 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40717,1689675306287 expired; onlineServers=0 2023-07-18 10:15:10,268 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46153,1689675305766' ***** 2023-07-18 10:15:10,268 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 10:15:10,269 DEBUG [M:0;jenkins-hbase4:46153] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6daed9d8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 10:15:10,269 INFO [M:0;jenkins-hbase4:46153] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 10:15:10,271 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 10:15:10,271 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 10:15:10,272 INFO [M:0;jenkins-hbase4:46153] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1d66a142{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 10:15:10,272 INFO [M:0;jenkins-hbase4:46153] server.AbstractConnector(383): Stopped ServerConnector@700c4bda{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:10,272 INFO [M:0;jenkins-hbase4:46153] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 10:15:10,273 INFO [M:0;jenkins-hbase4:46153] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@936f509{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 10:15:10,273 INFO [M:0;jenkins-hbase4:46153] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d867386{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/hadoop.log.dir/,STOPPED} 2023-07-18 10:15:10,274 INFO [M:0;jenkins-hbase4:46153] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46153,1689675305766 2023-07-18 10:15:10,274 INFO [M:0;jenkins-hbase4:46153] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46153,1689675305766; all regions closed. 2023-07-18 10:15:10,274 DEBUG [M:0;jenkins-hbase4:46153] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 10:15:10,274 INFO [M:0;jenkins-hbase4:46153] master.HMaster(1491): Stopping master jetty server 2023-07-18 10:15:10,274 INFO [M:0;jenkins-hbase4:46153] server.AbstractConnector(383): Stopped ServerConnector@44575355{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 10:15:10,275 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 10:15:10,275 DEBUG [M:0;jenkins-hbase4:46153] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 10:15:10,275 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-18 10:15:10,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675306648] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689675306648,5,FailOnTimeoutGroup] 2023-07-18 10:15:10,275 DEBUG [M:0;jenkins-hbase4:46153] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 10:15:10,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675306649] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689675306649,5,FailOnTimeoutGroup] 2023-07-18 10:15:10,275 INFO [M:0;jenkins-hbase4:46153] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 10:15:10,275 INFO [M:0;jenkins-hbase4:46153] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 10:15:10,275 INFO [M:0;jenkins-hbase4:46153] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 10:15:10,275 DEBUG [M:0;jenkins-hbase4:46153] master.HMaster(1512): Stopping service threads 2023-07-18 10:15:10,275 INFO [M:0;jenkins-hbase4:46153] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 10:15:10,275 ERROR [M:0;jenkins-hbase4:46153] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 10:15:10,276 INFO [M:0;jenkins-hbase4:46153] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 10:15:10,276 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 10:15:10,276 DEBUG [M:0;jenkins-hbase4:46153] zookeeper.ZKUtil(398): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 10:15:10,276 WARN [M:0;jenkins-hbase4:46153] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 10:15:10,276 INFO [M:0;jenkins-hbase4:46153] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 10:15:10,276 INFO [M:0;jenkins-hbase4:46153] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 10:15:10,276 DEBUG [M:0;jenkins-hbase4:46153] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 10:15:10,276 INFO [M:0;jenkins-hbase4:46153] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:10,276 DEBUG [M:0;jenkins-hbase4:46153] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:10,276 DEBUG [M:0;jenkins-hbase4:46153] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 10:15:10,276 DEBUG [M:0;jenkins-hbase4:46153] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 10:15:10,277 INFO [M:0;jenkins-hbase4:46153] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.17 KB heapSize=90.62 KB 2023-07-18 10:15:10,297 INFO [M:0;jenkins-hbase4:46153] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.17 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3c3002073aab41478152fe01b999dd83 2023-07-18 10:15:10,303 DEBUG [M:0;jenkins-hbase4:46153] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3c3002073aab41478152fe01b999dd83 as hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3c3002073aab41478152fe01b999dd83 2023-07-18 10:15:10,309 INFO [M:0;jenkins-hbase4:46153] regionserver.HStore(1080): Added hdfs://localhost:39145/user/jenkins/test-data/86feece2-b44a-afc8-bb92-616a38212792/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3c3002073aab41478152fe01b999dd83, entries=22, sequenceid=175, filesize=11.1 K 2023-07-18 10:15:10,310 INFO [M:0;jenkins-hbase4:46153] regionserver.HRegion(2948): Finished flush of dataSize ~76.17 KB/78001, heapSize ~90.60 KB/92776, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 33ms, sequenceid=175, compaction requested=false 2023-07-18 10:15:10,319 INFO [M:0;jenkins-hbase4:46153] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 10:15:10,319 DEBUG [M:0;jenkins-hbase4:46153] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 10:15:10,323 INFO [M:0;jenkins-hbase4:46153] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 10:15:10,323 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 10:15:10,324 INFO [M:0;jenkins-hbase4:46153] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46153 2023-07-18 10:15:10,326 DEBUG [M:0;jenkins-hbase4:46153] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46153,1689675305766 already deleted, retry=false 2023-07-18 10:15:10,812 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 10:15:10,812 INFO [M:0;jenkins-hbase4:46153] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46153,1689675305766; zookeeper connection closed. 
2023-07-18 10:15:10,812 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): master:46153-0x10177ed96110000, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 10:15:10,912 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 10:15:10,912 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:40717-0x10177ed96110003, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 10:15:10,913 INFO [RS:2;jenkins-hbase4:40717] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40717,1689675306287; zookeeper connection closed.
2023-07-18 10:15:10,914 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@53c40f83] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@53c40f83
2023-07-18 10:15:11,013 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 10:15:11,013 INFO [RS:1;jenkins-hbase4:35165] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35165,1689675306111; zookeeper connection closed.
2023-07-18 10:15:11,013 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:35165-0x10177ed96110002, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 10:15:11,019 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4331795b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4331795b
2023-07-18 10:15:11,113 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 10:15:11,113 INFO [RS:0;jenkins-hbase4:37027] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37027,1689675305953; zookeeper connection closed.
2023-07-18 10:15:11,113 DEBUG [Listener at localhost/44679-EventThread] zookeeper.ZKWatcher(600): regionserver:37027-0x10177ed96110001, quorum=127.0.0.1:56417, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 10:15:11,113 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@fbea257] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@fbea257
2023-07-18 10:15:11,113 INFO [Listener at localhost/44679] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-18 10:15:11,114 WARN [Listener at localhost/44679] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 10:15:11,118 INFO [Listener at localhost/44679] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 10:15:11,221 WARN [BP-774630301-172.31.14.131-1689675304930 heartbeating to localhost/127.0.0.1:39145] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-18 10:15:11,222 WARN [BP-774630301-172.31.14.131-1689675304930 heartbeating to localhost/127.0.0.1:39145] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-774630301-172.31.14.131-1689675304930 (Datanode Uuid 5ef0d31e-9a72-4f2c-9b55-6a38121c0be8) service to localhost/127.0.0.1:39145
2023-07-18 10:15:11,224 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data6/current/BP-774630301-172.31.14.131-1689675304930] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 10:15:11,224 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data5/current/BP-774630301-172.31.14.131-1689675304930] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 10:15:11,225 WARN [Listener at localhost/44679] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 10:15:11,239 INFO [Listener at localhost/44679] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 10:15:11,342 WARN [BP-774630301-172.31.14.131-1689675304930 heartbeating to localhost/127.0.0.1:39145] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-18 10:15:11,342 WARN [BP-774630301-172.31.14.131-1689675304930 heartbeating to localhost/127.0.0.1:39145] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-774630301-172.31.14.131-1689675304930 (Datanode Uuid 889dd26b-f065-474d-9af0-febdf961a555) service to localhost/127.0.0.1:39145
2023-07-18 10:15:11,342 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data3/current/BP-774630301-172.31.14.131-1689675304930] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 10:15:11,343 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data4/current/BP-774630301-172.31.14.131-1689675304930] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 10:15:11,344 WARN [Listener at localhost/44679] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 10:15:11,346 INFO [Listener at localhost/44679] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 10:15:11,410 WARN [BP-774630301-172.31.14.131-1689675304930 heartbeating to localhost/127.0.0.1:39145] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-774630301-172.31.14.131-1689675304930 (Datanode Uuid b836b7b3-727c-4af5-8105-0d364ba55840) service to localhost/127.0.0.1:39145
2023-07-18 10:15:11,411 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data1/current/BP-774630301-172.31.14.131-1689675304930] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 10:15:11,411 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d19a173b-073f-b888-bb58-de35142bed71/cluster_2f951591-5820-0113-0cad-3416d81cccca/dfs/data/data2/current/BP-774630301-172.31.14.131-1689675304930] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 10:15:11,459 INFO [Listener at localhost/44679] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 10:15:11,576 INFO [Listener at localhost/44679] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-18 10:15:11,607 INFO [Listener at localhost/44679] hbase.HBaseTestingUtility(1293): Minicluster is down
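Note: the tail of this log is the standard HBaseTestingUtility teardown sequence (JVMClusterUtil stops the master and region servers, then the DataNodes and MiniZK cluster are shut down, ending with "Minicluster is down"). The following is a minimal, hypothetical Java sketch of a test class that drives this lifecycle with the HBase 2.x mini-cluster test APIs; the class name, table assertions, and option values are illustrative assumptions and this is not the actual TestRSGroupsAdmin1 source.

// Hypothetical sketch: start and stop an HBase mini cluster, producing
// startup/teardown logging similar to this log. Not the real test class.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleExample {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Starts in-process HDFS DataNodes, a MiniZK cluster, one master,
    // and three region servers (values chosen for illustration).
    TEST_UTIL.startMiniCluster(StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build());
  }

  @Test
  public void clusterComesUp() throws Exception {
    // Placeholder check; a real rsgroup test would exercise Admin/RSGroup APIs here.
    Assert.assertTrue(TEST_UTIL.getMiniHBaseCluster().getMaster().isInitialized());
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Stops region servers and master, the DataNodes, and the MiniZK cluster,
    // which is what emits the shutdown messages seen above.
    TEST_UTIL.shutdownMiniCluster();
  }
}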