2023-07-13 15:15:50,730 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0 2023-07-13 15:15:50,750 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-13 15:15:50,766 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 15:15:50,766 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3, deleteOnExit=true 2023-07-13 15:15:50,766 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 15:15:50,767 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/test.cache.data in system properties and HBase conf 2023-07-13 15:15:50,768 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 15:15:50,768 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir in system properties and HBase conf 2023-07-13 15:15:50,769 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 15:15:50,769 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 15:15:50,769 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 15:15:50,894 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-13 15:15:51,296 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-13 15:15:51,300 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 15:15:51,301 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 15:15:51,301 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 15:15:51,301 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 15:15:51,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 15:15:51,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 15:15:51,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 15:15:51,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 15:15:51,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 15:15:51,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/nfs.dump.dir in system properties and HBase conf 2023-07-13 15:15:51,304 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir in system properties and HBase conf 2023-07-13 15:15:51,304 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 15:15:51,304 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 15:15:51,305 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 15:15:51,806 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 15:15:51,811 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 15:15:52,092 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-13 15:15:52,259 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-13 15:15:52,273 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:15:52,310 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:15:52,367 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/Jetty_localhost_44041_hdfs____a79bdj/webapp 2023-07-13 15:15:52,504 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44041 2023-07-13 15:15:52,542 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 15:15:52,542 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 15:15:52,976 WARN [Listener at localhost/37375] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:15:53,088 WARN [Listener at localhost/37375] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:15:53,106 WARN [Listener at localhost/37375] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:15:53,112 INFO [Listener at localhost/37375] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:15:53,124 INFO [Listener at localhost/37375] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/Jetty_localhost_35723_datanode____kr8yrh/webapp 2023-07-13 15:15:53,235 INFO [Listener at localhost/37375] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35723 2023-07-13 15:15:53,656 WARN [Listener at localhost/33287] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:15:53,667 WARN [Listener at localhost/33287] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:15:53,670 WARN [Listener at localhost/33287] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:15:53,672 INFO [Listener at localhost/33287] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:15:53,680 INFO [Listener at localhost/33287] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/Jetty_localhost_36559_datanode____.kpt4n7/webapp 2023-07-13 15:15:53,797 INFO [Listener at localhost/33287] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36559 2023-07-13 15:15:53,818 WARN [Listener at localhost/40775] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:15:53,839 WARN [Listener at localhost/40775] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:15:53,843 WARN [Listener at localhost/40775] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:15:53,845 INFO [Listener at localhost/40775] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:15:53,851 INFO [Listener at localhost/40775] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/Jetty_localhost_45397_datanode____7j97x3/webapp 2023-07-13 15:15:53,974 INFO [Listener at localhost/40775] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45397 2023-07-13 15:15:53,990 WARN [Listener at localhost/37749] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:15:54,200 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x88acace3f623023: Processing first storage report for DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c from datanode 7de2614e-f73e-48f8-b6ee-51c4ba5eeedf 2023-07-13 15:15:54,202 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x88acace3f623023: from storage DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c node DatanodeRegistration(127.0.0.1:44071, datanodeUuid=7de2614e-f73e-48f8-b6ee-51c4ba5eeedf, infoPort=35939, 
infoSecurePort=0, ipcPort=37749, storageInfo=lv=-57;cid=testClusterID;nsid=2084871990;c=1689261351889), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-13 15:15:54,202 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfb6cf0b36373974: Processing first storage report for DS-7937480f-287a-496c-8e6d-49e1ae6250f9 from datanode c919d317-4c5f-434b-9ed6-3b0cc7a212cf 2023-07-13 15:15:54,202 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfb6cf0b36373974: from storage DS-7937480f-287a-496c-8e6d-49e1ae6250f9 node DatanodeRegistration(127.0.0.1:36081, datanodeUuid=c919d317-4c5f-434b-9ed6-3b0cc7a212cf, infoPort=35679, infoSecurePort=0, ipcPort=40775, storageInfo=lv=-57;cid=testClusterID;nsid=2084871990;c=1689261351889), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:15:54,202 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe4886e69b8b3b1b9: Processing first storage report for DS-714f3de1-2f7f-4438-96c5-f1f766536cbb from datanode fe0ce298-8b81-44b2-b5d5-733cee6fb2d7 2023-07-13 15:15:54,202 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe4886e69b8b3b1b9: from storage DS-714f3de1-2f7f-4438-96c5-f1f766536cbb node DatanodeRegistration(127.0.0.1:33525, datanodeUuid=fe0ce298-8b81-44b2-b5d5-733cee6fb2d7, infoPort=39759, infoSecurePort=0, ipcPort=33287, storageInfo=lv=-57;cid=testClusterID;nsid=2084871990;c=1689261351889), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 15:15:54,203 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x88acace3f623023: Processing first storage report for DS-df5c84a7-e1de-4338-9149-126ef847e718 from datanode 7de2614e-f73e-48f8-b6ee-51c4ba5eeedf 2023-07-13 15:15:54,203 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x88acace3f623023: from storage DS-df5c84a7-e1de-4338-9149-126ef847e718 node DatanodeRegistration(127.0.0.1:44071, datanodeUuid=7de2614e-f73e-48f8-b6ee-51c4ba5eeedf, infoPort=35939, infoSecurePort=0, ipcPort=37749, storageInfo=lv=-57;cid=testClusterID;nsid=2084871990;c=1689261351889), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:15:54,203 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfb6cf0b36373974: Processing first storage report for DS-bc93c324-3e13-4fce-b099-201bb12dbefe from datanode c919d317-4c5f-434b-9ed6-3b0cc7a212cf 2023-07-13 15:15:54,203 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfb6cf0b36373974: from storage DS-bc93c324-3e13-4fce-b099-201bb12dbefe node DatanodeRegistration(127.0.0.1:36081, datanodeUuid=c919d317-4c5f-434b-9ed6-3b0cc7a212cf, infoPort=35679, infoSecurePort=0, ipcPort=40775, storageInfo=lv=-57;cid=testClusterID;nsid=2084871990;c=1689261351889), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:15:54,203 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe4886e69b8b3b1b9: Processing first storage report for DS-fd6fa27a-cb9b-4074-8d13-566dae07eff0 from datanode fe0ce298-8b81-44b2-b5d5-733cee6fb2d7 2023-07-13 15:15:54,203 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe4886e69b8b3b1b9: from storage 
DS-fd6fa27a-cb9b-4074-8d13-566dae07eff0 node DatanodeRegistration(127.0.0.1:33525, datanodeUuid=fe0ce298-8b81-44b2-b5d5-733cee6fb2d7, infoPort=39759, infoSecurePort=0, ipcPort=33287, storageInfo=lv=-57;cid=testClusterID;nsid=2084871990;c=1689261351889), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 15:15:54,452 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0 2023-07-13 15:15:54,557 INFO [Listener at localhost/37749] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/zookeeper_0, clientPort=52275, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 15:15:54,576 INFO [Listener at localhost/37749] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52275 2023-07-13 15:15:54,588 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:54,590 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:55,299 INFO [Listener at localhost/37749] util.FSUtils(471): Created version file at hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536 with version=8 2023-07-13 15:15:55,299 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/hbase-staging 2023-07-13 15:15:55,312 DEBUG [Listener at localhost/37749] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 15:15:55,312 DEBUG [Listener at localhost/37749] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 15:15:55,312 DEBUG [Listener at localhost/37749] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 15:15:55,313 DEBUG [Listener at localhost/37749] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-13 15:15:55,696 INFO [Listener at localhost/37749] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-13 15:15:56,261 INFO [Listener at localhost/37749] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:15:56,301 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:56,302 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:56,302 INFO [Listener at localhost/37749] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:15:56,303 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:56,303 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:15:56,471 INFO [Listener at localhost/37749] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:15:56,549 DEBUG [Listener at localhost/37749] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-13 15:15:56,643 INFO [Listener at localhost/37749] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33053 2023-07-13 15:15:56,654 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:56,656 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:56,677 INFO [Listener at localhost/37749] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33053 connecting to ZooKeeper ensemble=127.0.0.1:52275 2023-07-13 15:15:56,723 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:330530x0, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:15:56,726 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33053-0x1015f41312f0000 connected 2023-07-13 15:15:56,751 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:15:56,752 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:15:56,755 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:15:56,764 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33053 2023-07-13 15:15:56,764 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33053 2023-07-13 15:15:56,765 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33053 2023-07-13 15:15:56,765 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33053 2023-07-13 15:15:56,765 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33053 2023-07-13 15:15:56,798 INFO [Listener at localhost/37749] log.Log(170): Logging initialized @6878ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-13 15:15:56,932 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:15:56,933 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:15:56,933 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:15:56,935 INFO [Listener at localhost/37749] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 15:15:56,935 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:15:56,935 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:15:56,939 INFO [Listener at localhost/37749] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:15:56,993 INFO [Listener at localhost/37749] http.HttpServer(1146): Jetty bound to port 40719 2023-07-13 15:15:56,995 INFO [Listener at localhost/37749] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:15:57,024 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,027 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c3edb02{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:15:57,028 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,028 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@72c0c148{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:15:57,216 INFO [Listener at localhost/37749] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:15:57,233 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:15:57,234 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:15:57,236 INFO [Listener at localhost/37749] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:15:57,246 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,274 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@45159224{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/jetty-0_0_0_0-40719-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5163500168469610330/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:15:57,287 INFO [Listener at localhost/37749] server.AbstractConnector(333): Started ServerConnector@2fa4614f{HTTP/1.1, (http/1.1)}{0.0.0.0:40719} 2023-07-13 15:15:57,287 INFO [Listener at localhost/37749] server.Server(415): Started @7367ms 2023-07-13 15:15:57,291 INFO [Listener at localhost/37749] master.HMaster(444): hbase.rootdir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536, hbase.cluster.distributed=false 2023-07-13 15:15:57,367 INFO [Listener at localhost/37749] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:15:57,368 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,368 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,368 INFO 
[Listener at localhost/37749] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:15:57,368 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,368 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:15:57,374 INFO [Listener at localhost/37749] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:15:57,377 INFO [Listener at localhost/37749] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32995 2023-07-13 15:15:57,379 INFO [Listener at localhost/37749] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:15:57,386 DEBUG [Listener at localhost/37749] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:15:57,387 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:57,389 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:57,390 INFO [Listener at localhost/37749] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32995 connecting to ZooKeeper ensemble=127.0.0.1:52275 2023-07-13 15:15:57,394 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:329950x0, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:15:57,395 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:329950x0, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:15:57,395 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32995-0x1015f41312f0001 connected 2023-07-13 15:15:57,397 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:15:57,398 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:15:57,398 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32995 2023-07-13 15:15:57,399 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32995 2023-07-13 15:15:57,400 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32995 2023-07-13 15:15:57,401 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32995 2023-07-13 15:15:57,401 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32995 2023-07-13 15:15:57,403 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:15:57,403 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:15:57,404 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:15:57,405 INFO [Listener at localhost/37749] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:15:57,405 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:15:57,405 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:15:57,405 INFO [Listener at localhost/37749] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:15:57,407 INFO [Listener at localhost/37749] http.HttpServer(1146): Jetty bound to port 35811 2023-07-13 15:15:57,407 INFO [Listener at localhost/37749] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:15:57,409 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,409 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@249e2011{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:15:57,410 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,410 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a6a072{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:15:57,534 INFO [Listener at localhost/37749] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:15:57,536 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:15:57,536 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:15:57,536 INFO [Listener at localhost/37749] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:15:57,537 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,542 INFO 
[Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2a1b55bd{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/jetty-0_0_0_0-35811-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7261637664113523734/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:15:57,543 INFO [Listener at localhost/37749] server.AbstractConnector(333): Started ServerConnector@4cab7999{HTTP/1.1, (http/1.1)}{0.0.0.0:35811} 2023-07-13 15:15:57,543 INFO [Listener at localhost/37749] server.Server(415): Started @7623ms 2023-07-13 15:15:57,556 INFO [Listener at localhost/37749] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:15:57,556 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,556 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,557 INFO [Listener at localhost/37749] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:15:57,557 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,557 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:15:57,558 INFO [Listener at localhost/37749] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:15:57,560 INFO [Listener at localhost/37749] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44089 2023-07-13 15:15:57,560 INFO [Listener at localhost/37749] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:15:57,563 DEBUG [Listener at localhost/37749] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:15:57,564 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:57,566 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:57,567 INFO [Listener at localhost/37749] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44089 connecting to ZooKeeper ensemble=127.0.0.1:52275 2023-07-13 15:15:57,571 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:440890x0, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 
15:15:57,572 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44089-0x1015f41312f0002 connected 2023-07-13 15:15:57,572 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:15:57,573 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:15:57,574 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:15:57,577 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44089 2023-07-13 15:15:57,578 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44089 2023-07-13 15:15:57,578 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44089 2023-07-13 15:15:57,579 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44089 2023-07-13 15:15:57,582 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44089 2023-07-13 15:15:57,584 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:15:57,585 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:15:57,585 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:15:57,585 INFO [Listener at localhost/37749] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:15:57,585 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:15:57,585 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:15:57,586 INFO [Listener at localhost/37749] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:15:57,586 INFO [Listener at localhost/37749] http.HttpServer(1146): Jetty bound to port 45317 2023-07-13 15:15:57,587 INFO [Listener at localhost/37749] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:15:57,594 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,595 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2afce463{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:15:57,595 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,595 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ec386b4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:15:57,732 INFO [Listener at localhost/37749] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:15:57,733 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:15:57,733 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:15:57,733 INFO [Listener at localhost/37749] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:15:57,734 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,735 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4d520d27{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/jetty-0_0_0_0-45317-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4461614109229397811/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:15:57,737 INFO [Listener at localhost/37749] server.AbstractConnector(333): Started ServerConnector@72b0dcfa{HTTP/1.1, (http/1.1)}{0.0.0.0:45317} 2023-07-13 15:15:57,737 INFO [Listener at localhost/37749] server.Server(415): Started @7817ms 2023-07-13 15:15:57,749 INFO [Listener at localhost/37749] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:15:57,749 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,749 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,749 INFO [Listener at localhost/37749] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:15:57,749 INFO 
[Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:15:57,750 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:15:57,750 INFO [Listener at localhost/37749] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:15:57,751 INFO [Listener at localhost/37749] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40971 2023-07-13 15:15:57,752 INFO [Listener at localhost/37749] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:15:57,753 DEBUG [Listener at localhost/37749] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:15:57,754 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:57,755 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:57,756 INFO [Listener at localhost/37749] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40971 connecting to ZooKeeper ensemble=127.0.0.1:52275 2023-07-13 15:15:57,759 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:409710x0, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:15:57,761 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40971-0x1015f41312f0003 connected 2023-07-13 15:15:57,761 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:15:57,762 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:15:57,762 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:15:57,763 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40971 2023-07-13 15:15:57,763 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40971 2023-07-13 15:15:57,764 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40971 2023-07-13 15:15:57,764 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40971 2023-07-13 15:15:57,764 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40971 2023-07-13 15:15:57,767 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:15:57,767 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:15:57,767 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:15:57,767 INFO [Listener at localhost/37749] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:15:57,767 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:15:57,767 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:15:57,768 INFO [Listener at localhost/37749] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:15:57,769 INFO [Listener at localhost/37749] http.HttpServer(1146): Jetty bound to port 33029 2023-07-13 15:15:57,769 INFO [Listener at localhost/37749] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:15:57,772 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,772 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@477c886b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:15:57,773 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,773 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5526bfb1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:15:57,896 INFO [Listener at localhost/37749] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:15:57,897 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:15:57,898 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:15:57,898 INFO [Listener at localhost/37749] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:15:57,899 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:15:57,900 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@36c7be16{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/jetty-0_0_0_0-33029-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6155610243286613202/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:15:57,901 INFO [Listener at localhost/37749] server.AbstractConnector(333): Started ServerConnector@58b8c90a{HTTP/1.1, (http/1.1)}{0.0.0.0:33029} 2023-07-13 15:15:57,901 INFO [Listener at localhost/37749] server.Server(415): Started @7981ms 2023-07-13 15:15:57,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:15:57,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5e1629a3{HTTP/1.1, (http/1.1)}{0.0.0.0:38277} 2023-07-13 15:15:57,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @7994ms 2023-07-13 15:15:57,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:15:57,928 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:15:57,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:15:57,959 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:15:57,959 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:15:57,961 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:15:57,961 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:15:57,961 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:15:57,964 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:15:57,965 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:15:57,965 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33053,1689261355495 from backup master directory 2023-07-13 15:15:57,971 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:15:57,972 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:15:57,972 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:15:57,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:15:57,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-13 15:15:57,977 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-13 15:15:58,109 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/hbase.id with ID: 21793049-9caa-430c-8afc-71847c49302d 2023-07-13 15:15:58,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:15:58,191 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:15:58,277 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2ba7a330 to 127.0.0.1:52275 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:15:58,325 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e3a5e47, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:15:58,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:15:58,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 15:15:58,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-13 15:15:58,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-13 15:15:58,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:15:58,399 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:15:58,400 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:15:58,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store-tmp 2023-07-13 15:15:58,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:15:58,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:15:58,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:15:58,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:15:58,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:15:58,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:15:58,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
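The DEBUG stack traces above are routine capability probes rather than failures: the async WAL helpers use reflection to detect which Hadoop APIs exist and quietly fall back when one is missing. A minimal Java sketch of that probe pattern for the SHOULD_REPLICATE check, written as an illustration and not as HBase's actual code:

    import org.apache.hadoop.fs.CreateFlag;

    public final class ShouldReplicateProbe {
      // Returns the optional flag when the running Hadoop version defines it, otherwise null,
      // which is the condition behind "can not find SHOULD_REPLICATE flag, should be hadoop 2.x".
      static CreateFlag loadShouldReplicateFlag() {
        try {
          return CreateFlag.valueOf("SHOULD_REPLICATE");
        } catch (IllegalArgumentException e) {
          return null; // older Hadoop: the enum constant simply does not exist
        }
      }

      private ShouldReplicateProbe() {
      }
    }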
2023-07-13 15:15:58,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:15:58,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/WALs/jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:15:58,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33053%2C1689261355495, suffix=, logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/WALs/jenkins-hbase4.apache.org,33053,1689261355495, archiveDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/oldWALs, maxLogs=10 2023-07-13 15:15:58,592 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK] 2023-07-13 15:15:58,603 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK] 2023-07-13 15:15:58,593 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK] 2023-07-13 15:15:58,622 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:15:58,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/WALs/jenkins-hbase4.apache.org,33053,1689261355495/jenkins-hbase4.apache.org%2C33053%2C1689261355495.1689261358518 2023-07-13 15:15:58,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK], DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK], DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK]] 2023-07-13 15:15:58,723 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:15:58,723 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:15:58,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:15:58,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:15:58,790 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:15:58,797 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 15:15:58,835 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 15:15:58,846 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-13 15:15:58,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:15:58,853 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:15:58,868 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:15:58,872 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:15:58,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10463786880, jitterRate=-0.025483906269073486}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:15:58,873 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:15:58,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 15:15:58,902 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 15:15:58,902 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 15:15:58,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 15:15:58,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-13 15:15:58,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 36 msec 2023-07-13 15:15:58,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 15:15:58,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 15:15:58,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
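As a worked check of the split-policy line above, and assuming the stock 10 GiB default for hbase.hregion.max.filesize, the printed desiredMaxFileSize is just that default scaled by the logged jitterRate:

    10737418240 * (1 + (-0.025483906269073486))
      = 10737418240 - 273631360
      = 10463786880   (the desiredMaxFileSize reported by ConstantSizeRegionSplitPolicy)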
2023-07-13 15:15:58,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 15:15:58,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 15:15:58,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 15:15:58,993 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:15:58,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 15:15:58,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 15:15:59,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 15:15:59,013 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:15:59,013 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:15:59,013 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:15:59,013 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:15:59,013 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:15:59,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33053,1689261355495, sessionid=0x1015f41312f0000, setting cluster-up flag (Was=false) 2023-07-13 15:15:59,031 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:15:59,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 15:15:59,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:15:59,043 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:15:59,047 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 15:15:59,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:15:59,051 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.hbase-snapshot/.tmp 2023-07-13 15:15:59,106 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(951): ClusterId : 21793049-9caa-430c-8afc-71847c49302d 2023-07-13 15:15:59,106 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(951): ClusterId : 21793049-9caa-430c-8afc-71847c49302d 2023-07-13 15:15:59,107 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(951): ClusterId : 21793049-9caa-430c-8afc-71847c49302d 2023-07-13 15:15:59,117 DEBUG [RS:0;jenkins-hbase4:32995] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:15:59,117 DEBUG [RS:2;jenkins-hbase4:40971] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:15:59,117 DEBUG [RS:1;jenkins-hbase4:44089] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:15:59,125 DEBUG [RS:0;jenkins-hbase4:32995] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:15:59,125 DEBUG [RS:1;jenkins-hbase4:44089] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:15:59,125 DEBUG [RS:2;jenkins-hbase4:40971] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:15:59,125 DEBUG [RS:1;jenkins-hbase4:44089] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:15:59,125 DEBUG [RS:0;jenkins-hbase4:32995] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:15:59,125 DEBUG [RS:2;jenkins-hbase4:40971] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:15:59,130 DEBUG [RS:0;jenkins-hbase4:32995] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:15:59,130 DEBUG [RS:1;jenkins-hbase4:44089] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:15:59,130 DEBUG [RS:2;jenkins-hbase4:40971] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:15:59,132 DEBUG [RS:0;jenkins-hbase4:32995] zookeeper.ReadOnlyZKClient(139): Connect 0x218e60a3 to 127.0.0.1:52275 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-13 15:15:59,132 DEBUG [RS:1;jenkins-hbase4:44089] zookeeper.ReadOnlyZKClient(139): Connect 0x5af43e40 to 127.0.0.1:52275 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:15:59,132 DEBUG [RS:2;jenkins-hbase4:40971] zookeeper.ReadOnlyZKClient(139): Connect 0x0cda3187 to 127.0.0.1:52275 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:15:59,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 15:15:59,146 DEBUG [RS:0;jenkins-hbase4:32995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@454ae611, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:15:59,147 DEBUG [RS:2;jenkins-hbase4:40971] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21dde6c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:15:59,147 DEBUG [RS:0;jenkins-hbase4:32995] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a723910, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:15:59,147 DEBUG [RS:2;jenkins-hbase4:40971] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6225af7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:15:59,147 DEBUG [RS:1;jenkins-hbase4:44089] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6818191c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:15:59,147 DEBUG [RS:1;jenkins-hbase4:44089] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@234a7715, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:15:59,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 15:15:59,155 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:15:59,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 15:15:59,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
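The coprocessor lines above are where the rsgroup feature under test comes up: RSGroupAdminEndpoint is loaded as a master system coprocessor and registers RSGroupAdminService. A hedged Java sketch of the usual branch-2 style configuration for that, using the documented property names rather than whatever the test utility sets programmatically:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupSetupSketch {
      public static Configuration rsGroupEnabledConf() {
        Configuration conf = HBaseConfiguration.create();
        // Load the rsgroup admin endpoint on the master, matching the
        // "System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded" line.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Pair it with the group-aware balancer that the rsgroup module expects.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        return conf;
      }
    }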
2023-07-13 15:15:59,177 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:44089 2023-07-13 15:15:59,181 DEBUG [RS:0;jenkins-hbase4:32995] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:32995 2023-07-13 15:15:59,182 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:40971 2023-07-13 15:15:59,183 INFO [RS:0;jenkins-hbase4:32995] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:15:59,183 INFO [RS:1;jenkins-hbase4:44089] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:15:59,184 INFO [RS:1;jenkins-hbase4:44089] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:15:59,183 INFO [RS:2;jenkins-hbase4:40971] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:15:59,185 INFO [RS:2;jenkins-hbase4:40971] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:15:59,185 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:15:59,184 INFO [RS:0;jenkins-hbase4:32995] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:15:59,185 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:15:59,185 DEBUG [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:15:59,188 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33053,1689261355495 with isa=jenkins-hbase4.apache.org/172.31.14.131:40971, startcode=1689261357748 2023-07-13 15:15:59,188 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33053,1689261355495 with isa=jenkins-hbase4.apache.org/172.31.14.131:32995, startcode=1689261357367 2023-07-13 15:15:59,188 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33053,1689261355495 with isa=jenkins-hbase4.apache.org/172.31.14.131:44089, startcode=1689261357555 2023-07-13 15:15:59,215 DEBUG [RS:1;jenkins-hbase4:44089] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:15:59,215 DEBUG [RS:0;jenkins-hbase4:32995] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:15:59,215 DEBUG [RS:2;jenkins-hbase4:40971] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:15:59,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 15:15:59,290 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36921, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:15:59,290 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58265, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-13 15:15:59,291 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56071, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:15:59,303 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:15:59,318 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:15:59,322 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:15:59,355 DEBUG [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 15:15:59,355 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 15:15:59,355 WARN [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 15:15:59,355 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 15:15:59,355 WARN [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
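The ServerNotRunningYetException stacks above record an expected startup race: each region server calls reportForDuty before the master's RPC services are open, sees "Master is not running yet", sleeps 100 ms and tries again. A generic Java sketch of that retry shape; the types and method below are illustrative stand-ins, not HBase's real signatures:

    import java.util.concurrent.TimeUnit;

    public class ReportForDutyRetrySketch {
      static class ServerNotRunningYetException extends Exception {
      }

      interface MasterStub {
        // Illustrative stand-in for the regionServerStartup RPC seen in the traces above.
        void regionServerStartup() throws ServerNotRunningYetException;
      }

      // Keep retrying with a short sleep until the master accepts the registration,
      // mirroring "reportForDuty failed; sleeping 100 ms and then retrying."
      static void reportForDuty(MasterStub master) throws InterruptedException {
        while (true) {
          try {
            master.regionServerStartup();
            return;
          } catch (ServerNotRunningYetException e) {
            TimeUnit.MILLISECONDS.sleep(100);
          }
        }
      }
    }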
2023-07-13 15:15:59,355 WARN [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 15:15:59,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:15:59,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:15:59,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:15:59,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:15:59,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:15:59,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:15:59,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:15:59,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:15:59,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 15:15:59,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:15:59,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; 
org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689261389370 2023-07-13 15:15:59,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 15:15:59,377 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 15:15:59,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 15:15:59,378 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 15:15:59,381 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 15:15:59,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 15:15:59,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 15:15:59,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 15:15:59,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 15:15:59,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
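The InitMetaProcedure line above spells out the descriptor it bootstraps for hbase:meta. For reference, a hedged sketch of expressing the same kind of family settings through the public 2.x builder API; the table name is hypothetical and this is not the code path InitMetaProcedure itself uses:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo:meta_like"))
            // Same shape as the logged 'info' family: 3 versions, in-memory, 8 KB blocks, no bloom filter.
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(3)
                .setInMemory(true)
                .setBlocksize(8192)
                .setBloomFilterType(BloomType.NONE)
                .build())
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
      }
    }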
2023-07-13 15:15:59,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 15:15:59,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 15:15:59,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 15:15:59,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 15:15:59,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 15:15:59,402 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261359402,5,FailOnTimeoutGroup] 2023-07-13 15:15:59,402 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261359402,5,FailOnTimeoutGroup] 2023-07-13 15:15:59,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 15:15:59,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
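The HMaster line above doubles as a how-to: reopening regions with very high storeFileRefCount stays disabled until the named property is given a positive threshold. A tiny hedged sketch using the property name exactly as logged; the value 3 is only an example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class StoreFileRefCountSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Any value > 0 enables the check described in the log line above.
        conf.setInt("hbase.regions.recovery.store.file.ref.count", 3);
        System.out.println(conf.get("hbase.regions.recovery.store.file.ref.count"));
      }
    }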
2023-07-13 15:15:59,457 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33053,1689261355495 with isa=jenkins-hbase4.apache.org/172.31.14.131:44089, startcode=1689261357555 2023-07-13 15:15:59,458 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33053,1689261355495 with isa=jenkins-hbase4.apache.org/172.31.14.131:40971, startcode=1689261357748 2023-07-13 15:15:59,458 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33053,1689261355495 with isa=jenkins-hbase4.apache.org/172.31.14.131:32995, startcode=1689261357367 2023-07-13 15:15:59,461 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 15:15:59,462 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 15:15:59,463 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536 2023-07-13 15:15:59,465 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33053] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:15:59,467 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:15:59,469 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 15:15:59,480 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33053] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:15:59,480 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:15:59,480 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 15:15:59,481 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536 2023-07-13 15:15:59,481 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37375 2023-07-13 15:15:59,481 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40719 2023-07-13 15:15:59,483 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536 2023-07-13 15:15:59,483 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33053] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:15:59,483 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37375 2023-07-13 15:15:59,483 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40719 2023-07-13 15:15:59,483 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:15:59,484 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 15:15:59,486 DEBUG [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536 2023-07-13 15:15:59,486 DEBUG [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37375 2023-07-13 15:15:59,486 DEBUG [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40719 2023-07-13 15:15:59,504 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:15:59,507 DEBUG [RS:2;jenkins-hbase4:40971] zookeeper.ZKUtil(162): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:15:59,507 DEBUG [RS:1;jenkins-hbase4:44089] zookeeper.ZKUtil(162): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:15:59,508 DEBUG [RS:0;jenkins-hbase4:32995] zookeeper.ZKUtil(162): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:15:59,508 WARN [RS:2;jenkins-hbase4:40971] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:15:59,508 INFO [RS:2;jenkins-hbase4:40971] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:15:59,509 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:15:59,508 WARN [RS:1;jenkins-hbase4:44089] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:15:59,508 WARN [RS:0;jenkins-hbase4:32995] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
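The ZKUtil lines above show each region server putting a watch on its own entry under /hbase/rs, which is how liveness is tracked. A hedged sketch of the underlying ZooKeeper primitives only, not of HBase's ZKUtil or RegionServerTracker code:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class RsZnodeSketch {
      // A region server's liveness is an ephemeral child of /hbase/rs; watches are
      // re-armed so interested parties hear about nodes appearing or disappearing.
      public static void register(ZooKeeper zk, String serverName) throws Exception {
        zk.create("/hbase/rs/" + serverName,
            new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE,
            CreateMode.EPHEMERAL);          // removed automatically when the session ends
        zk.getChildren("/hbase/rs", true);  // set a children watch, as the master's tracker does
      }
    }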
2023-07-13 15:15:59,512 INFO [RS:0;jenkins-hbase4:32995] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:15:59,512 DEBUG [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:15:59,511 INFO [RS:1;jenkins-hbase4:44089] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:15:59,513 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:15:59,514 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44089,1689261357555] 2023-07-13 15:15:59,514 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40971,1689261357748] 2023-07-13 15:15:59,514 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32995,1689261357367] 2023-07-13 15:15:59,530 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:15:59,532 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:15:59,535 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info 2023-07-13 15:15:59,536 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:15:59,536 DEBUG [RS:0;jenkins-hbase4:32995] zookeeper.ZKUtil(162): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:15:59,536 DEBUG [RS:1;jenkins-hbase4:44089] zookeeper.ZKUtil(162): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:15:59,536 DEBUG [RS:2;jenkins-hbase4:40971] zookeeper.ZKUtil(162): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:15:59,538 DEBUG [RS:1;jenkins-hbase4:44089] zookeeper.ZKUtil(162): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:15:59,538 DEBUG [RS:2;jenkins-hbase4:40971] zookeeper.ZKUtil(162): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:15:59,538 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:15:59,538 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:15:59,538 DEBUG [RS:0;jenkins-hbase4:32995] zookeeper.ZKUtil(162): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:15:59,538 DEBUG [RS:1;jenkins-hbase4:44089] zookeeper.ZKUtil(162): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:15:59,538 DEBUG [RS:2;jenkins-hbase4:40971] zookeeper.ZKUtil(162): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:15:59,539 DEBUG [RS:0;jenkins-hbase4:32995] zookeeper.ZKUtil(162): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:15:59,541 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:15:59,542 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:15:59,543 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:15:59,543 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:15:59,545 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table 2023-07-13 15:15:59,545 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:15:59,546 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:15:59,550 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740 2023-07-13 15:15:59,551 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740 2023-07-13 15:15:59,555 DEBUG [RS:0;jenkins-hbase4:32995] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:15:59,555 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:15:59,555 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:15:59,560 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
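The FlushLargeStoresPolicy line above is easy to verify from the log itself: hbase:meta carries three column families (info, rep_barrier, table) and no per-family lower bound is set in its descriptor, so the policy falls back to the region memstore flush size divided by the family count. With the default 128 MB flush size that is 134217728 / 3 = 44739242 bytes, roughly 42.7 MB, which matches the flushSizeLowerBound=44739242 reported when the region is opened a few lines later.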
2023-07-13 15:15:59,563 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:15:59,570 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:15:59,571 INFO [RS:1;jenkins-hbase4:44089] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:15:59,571 INFO [RS:0;jenkins-hbase4:32995] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:15:59,571 INFO [RS:2;jenkins-hbase4:40971] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:15:59,571 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9406985120, jitterRate=-0.12390623986721039}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:15:59,571 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:15:59,572 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:15:59,572 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:15:59,572 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:15:59,572 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:15:59,572 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:15:59,573 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:15:59,573 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:15:59,586 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 15:15:59,586 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 15:15:59,600 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 15:15:59,608 INFO [RS:2;jenkins-hbase4:40971] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:15:59,608 INFO [RS:0;jenkins-hbase4:32995] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:15:59,609 INFO [RS:1;jenkins-hbase4:44089] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:15:59,614 INFO [RS:2;jenkins-hbase4:40971] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:15:59,614 INFO [RS:1;jenkins-hbase4:44089] throttle.PressureAwareCompactionThroughputController(131): Compaction 
throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:15:59,615 INFO [RS:2;jenkins-hbase4:40971] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,615 INFO [RS:1;jenkins-hbase4:44089] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,615 INFO [RS:0;jenkins-hbase4:32995] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:15:59,615 INFO [RS:0;jenkins-hbase4:32995] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,616 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:15:59,616 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:15:59,616 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:15:59,636 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 15:15:59,639 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 15:15:59,645 INFO [RS:0;jenkins-hbase4:32995] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,645 INFO [RS:2;jenkins-hbase4:40971] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,645 INFO [RS:1;jenkins-hbase4:44089] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
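The PressureAwareCompactionThroughputController bounds reported by each region server (100 MB/s upper, 50 MB/s lower, 60 s tuning period) are its defaults. As a sketch only, and assuming the key names below have not changed in your HBase release (they are taken from memory, so verify against hbase-default.xml), the same knobs map onto configuration like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputSettings {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed keys for PressureAwareCompactionThroughputController; values mirror the log above.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024); // 100 MB/s
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);   // 50 MB/s
        conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60000);                // 60 s tuning period
        System.out.println(conf.get("hbase.hstore.compaction.throughput.higher.bound"));
      }
    }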
2023-07-13 15:15:59,645 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,646 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,647 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,647 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,647 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,647 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,647 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:15:59,648 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,647 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:15:59,648 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-07-13 15:15:59,648 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:15:59,649 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,649 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,649 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,649 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,649 DEBUG [RS:1;jenkins-hbase4:44089] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,649 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,648 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,649 DEBUG [RS:0;jenkins-hbase4:32995] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,649 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,649 DEBUG [RS:2;jenkins-hbase4:40971] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:15:59,710 INFO [RS:2;jenkins-hbase4:40971] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,711 INFO [RS:2;jenkins-hbase4:40971] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,711 INFO [RS:2;jenkins-hbase4:40971] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,711 INFO [RS:1;jenkins-hbase4:44089] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,712 INFO [RS:1;jenkins-hbase4:44089] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,712 INFO [RS:0;jenkins-hbase4:32995] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
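Each ScheduledChore registered above (CompactionChecker every 1000 ms, MemstoreFlusherChore every 1000 ms, nonceCleaner every 360000 ms, and so on) is a named periodic task run by the region server's ChoreService. Purely as an illustration of the scheduling semantics, not HBase's own ChoreService implementation, the same period-based behaviour looks like this with a plain JDK scheduler:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreSketch {
      public static void main(String[] args) {
        ScheduledExecutorService chorePool = Executors.newScheduledThreadPool(1);
        // Analogue of "CompactionChecker, period=1000, unit=MILLISECONDS": run a check every second.
        chorePool.scheduleAtFixedRate(
            () -> System.out.println("compaction check tick"), 1000, 1000, TimeUnit.MILLISECONDS);
      }
    }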
2023-07-13 15:15:59,712 INFO [RS:1;jenkins-hbase4:44089] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,712 INFO [RS:0;jenkins-hbase4:32995] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,712 INFO [RS:0;jenkins-hbase4:32995] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,769 INFO [RS:1;jenkins-hbase4:44089] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:15:59,769 INFO [RS:0;jenkins-hbase4:32995] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:15:59,769 INFO [RS:2;jenkins-hbase4:40971] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:15:59,773 INFO [RS:1;jenkins-hbase4:44089] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44089,1689261357555-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,773 INFO [RS:0;jenkins-hbase4:32995] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32995,1689261357367-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,773 INFO [RS:2;jenkins-hbase4:40971] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40971,1689261357748-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:15:59,791 INFO [RS:1;jenkins-hbase4:44089] regionserver.Replication(203): jenkins-hbase4.apache.org,44089,1689261357555 started 2023-07-13 15:15:59,791 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44089,1689261357555, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44089, sessionid=0x1015f41312f0002 2023-07-13 15:15:59,791 DEBUG [jenkins-hbase4:33053] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 15:15:59,796 INFO [RS:0;jenkins-hbase4:32995] regionserver.Replication(203): jenkins-hbase4.apache.org,32995,1689261357367 started 2023-07-13 15:15:59,796 DEBUG [RS:1;jenkins-hbase4:44089] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:15:59,797 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32995,1689261357367, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32995, sessionid=0x1015f41312f0001 2023-07-13 15:15:59,797 DEBUG [RS:1;jenkins-hbase4:44089] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:15:59,797 DEBUG [RS:1;jenkins-hbase4:44089] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44089,1689261357555' 2023-07-13 15:15:59,797 DEBUG [RS:0;jenkins-hbase4:32995] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:15:59,797 DEBUG [RS:1;jenkins-hbase4:44089] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:15:59,797 DEBUG [RS:0;jenkins-hbase4:32995] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:15:59,798 DEBUG [RS:0;jenkins-hbase4:32995] procedure.ZKProcedureMemberRpcs(357): Starting 
procedure member 'jenkins-hbase4.apache.org,32995,1689261357367' 2023-07-13 15:15:59,799 DEBUG [RS:0;jenkins-hbase4:32995] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:15:59,799 DEBUG [RS:1;jenkins-hbase4:44089] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:15:59,800 DEBUG [RS:0;jenkins-hbase4:32995] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:15:59,800 DEBUG [RS:0;jenkins-hbase4:32995] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:15:59,800 DEBUG [RS:1;jenkins-hbase4:44089] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:15:59,800 DEBUG [RS:0;jenkins-hbase4:32995] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:15:59,800 DEBUG [RS:1;jenkins-hbase4:44089] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:15:59,800 DEBUG [RS:0;jenkins-hbase4:32995] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:15:59,801 DEBUG [RS:0;jenkins-hbase4:32995] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32995,1689261357367' 2023-07-13 15:15:59,801 DEBUG [RS:0;jenkins-hbase4:32995] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:15:59,801 DEBUG [RS:0;jenkins-hbase4:32995] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:15:59,801 DEBUG [RS:0;jenkins-hbase4:32995] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:15:59,802 INFO [RS:0;jenkins-hbase4:32995] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:15:59,802 INFO [RS:0;jenkins-hbase4:32995] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
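The flush-table-proc and online-snapshot members that each region server starts above are the ZooKeeper-coordinated procedures behind ordinary admin operations. A minimal client-side sketch, assuming the mini cluster's configuration is on the classpath and using a hypothetical table name, of the kind of calls those members serve:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushAndSnapshot {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("someTable"); // hypothetical table name
          admin.flush(table);                    // the operation the flush-table-proc members coordinate
          admin.snapshot("someSnapshot", table); // the operation the online-snapshot members coordinate
        }
      }
    }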
2023-07-13 15:15:59,800 DEBUG [RS:1;jenkins-hbase4:44089] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:15:59,806 INFO [RS:2;jenkins-hbase4:40971] regionserver.Replication(203): jenkins-hbase4.apache.org,40971,1689261357748 started 2023-07-13 15:15:59,807 DEBUG [RS:1;jenkins-hbase4:44089] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44089,1689261357555' 2023-07-13 15:15:59,807 DEBUG [RS:1;jenkins-hbase4:44089] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:15:59,807 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40971,1689261357748, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40971, sessionid=0x1015f41312f0003 2023-07-13 15:15:59,807 DEBUG [RS:2;jenkins-hbase4:40971] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:15:59,807 DEBUG [RS:2;jenkins-hbase4:40971] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:15:59,807 DEBUG [RS:2;jenkins-hbase4:40971] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40971,1689261357748' 2023-07-13 15:15:59,807 DEBUG [RS:2;jenkins-hbase4:40971] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:15:59,807 DEBUG [RS:1;jenkins-hbase4:44089] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:15:59,808 DEBUG [RS:2;jenkins-hbase4:40971] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:15:59,808 DEBUG [RS:1;jenkins-hbase4:44089] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:15:59,808 INFO [RS:1;jenkins-hbase4:44089] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:15:59,808 INFO [RS:1;jenkins-hbase4:44089] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
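Quota support is reported as disabled on every region server because hbase.quota.enabled defaults to false. If RPC or space quotas were wanted in a run like this, the flag would be set on the cluster configuration before start-up; a minimal sketch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableQuotas {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Off by default, which is why RegionServerRpcQuotaManager and RegionServerSpaceQuotaManager stay idle above.
        conf.setBoolean("hbase.quota.enabled", true);
        System.out.println("hbase.quota.enabled = " + conf.getBoolean("hbase.quota.enabled", false));
      }
    }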
2023-07-13 15:15:59,809 DEBUG [RS:2;jenkins-hbase4:40971] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:15:59,809 DEBUG [RS:2;jenkins-hbase4:40971] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:15:59,809 DEBUG [RS:2;jenkins-hbase4:40971] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:15:59,810 DEBUG [RS:2;jenkins-hbase4:40971] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40971,1689261357748' 2023-07-13 15:15:59,810 DEBUG [RS:2;jenkins-hbase4:40971] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:15:59,810 DEBUG [RS:2;jenkins-hbase4:40971] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:15:59,810 DEBUG [jenkins-hbase4:33053] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:15:59,811 DEBUG [RS:2;jenkins-hbase4:40971] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:15:59,811 INFO [RS:2;jenkins-hbase4:40971] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:15:59,811 INFO [RS:2;jenkins-hbase4:40971] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 15:15:59,812 DEBUG [jenkins-hbase4:33053] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:15:59,812 DEBUG [jenkins-hbase4:33053] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:15:59,812 DEBUG [jenkins-hbase4:33053] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:15:59,812 DEBUG [jenkins-hbase4:33053] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:15:59,815 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40971,1689261357748, state=OPENING 2023-07-13 15:15:59,823 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 15:15:59,824 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:15:59,825 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:15:59,829 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:15:59,914 INFO [RS:1;jenkins-hbase4:44089] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44089%2C1689261357555, suffix=, logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,44089,1689261357555, archiveDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs, maxLogs=32 2023-07-13 15:15:59,914 INFO [RS:0;jenkins-hbase4:32995] wal.AbstractFSWAL(489): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32995%2C1689261357367, suffix=, logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,32995,1689261357367, archiveDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs, maxLogs=32 2023-07-13 15:15:59,914 INFO [RS:2;jenkins-hbase4:40971] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40971%2C1689261357748, suffix=, logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,40971,1689261357748, archiveDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs, maxLogs=32 2023-07-13 15:15:59,950 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK] 2023-07-13 15:15:59,950 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK] 2023-07-13 15:15:59,951 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK] 2023-07-13 15:15:59,953 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK] 2023-07-13 15:15:59,953 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK] 2023-07-13 15:15:59,954 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK] 2023-07-13 15:15:59,962 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK] 2023-07-13 15:15:59,962 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK] 2023-07-13 15:15:59,962 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK] 2023-07-13 15:15:59,969 INFO [RS:0;jenkins-hbase4:32995] 
wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,32995,1689261357367/jenkins-hbase4.apache.org%2C32995%2C1689261357367.1689261359920 2023-07-13 15:15:59,976 DEBUG [RS:0;jenkins-hbase4:32995] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK], DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK], DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK]] 2023-07-13 15:15:59,984 WARN [ReadOnlyZKClient-127.0.0.1:52275@0x2ba7a330] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 15:15:59,987 INFO [RS:2;jenkins-hbase4:40971] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,40971,1689261357748/jenkins-hbase4.apache.org%2C40971%2C1689261357748.1689261359920 2023-07-13 15:15:59,987 INFO [RS:1;jenkins-hbase4:44089] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,44089,1689261357555/jenkins-hbase4.apache.org%2C44089%2C1689261357555.1689261359920 2023-07-13 15:15:59,987 DEBUG [RS:2;jenkins-hbase4:40971] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK], DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK], DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK]] 2023-07-13 15:15:59,993 DEBUG [RS:1;jenkins-hbase4:44089] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK], DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK], DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK]] 2023-07-13 15:16:00,013 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:00,017 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:00,019 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:00,021 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35618, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:00,022 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35616, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:00,023 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40971] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:35616 deadline: 1689261420022, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:00,039 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 15:16:00,039 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:00,044 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40971%2C1689261357748.meta, suffix=.meta, logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,40971,1689261357748, archiveDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs, maxLogs=32 2023-07-13 15:16:00,069 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK] 2023-07-13 15:16:00,071 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK] 2023-07-13 15:16:00,071 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK] 2023-07-13 15:16:00,085 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,40971,1689261357748/jenkins-hbase4.apache.org%2C40971%2C1689261357748.meta.1689261360045.meta 2023-07-13 15:16:00,086 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK], DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK], DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK]] 2023-07-13 15:16:00,086 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:00,088 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:00,092 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 15:16:00,094 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
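The meta WAL above is created by the AsyncFSWALProvider with a 256 MB block size, a 128 MB roll size and maxLogs=32, and gets the .meta suffix that separates it from the same server's ordinary region WAL. A hedged sketch of the corresponding settings; the provider key is standard, while the size-related keys are assumptions from memory and worth verifying against hbase-default.xml for your release:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalSettings {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");      // AsyncFSWALProvider, as instantiated above
        conf.setInt("hbase.regionserver.maxlogs", 32);  // maxLogs=32 in the WAL configuration line
        // Assumed keys: WAL block size and the roll multiplier that yields rollsize = blocksize * multiplier.
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }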
2023-07-13 15:16:00,100 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 15:16:00,100 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:00,100 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 15:16:00,100 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 15:16:00,106 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:00,109 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info 2023-07-13 15:16:00,109 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info 2023-07-13 15:16:00,113 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:00,119 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:00,119 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:00,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:00,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:00,122 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:00,123 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:00,123 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:00,125 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table 2023-07-13 15:16:00,125 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table 2023-07-13 15:16:00,125 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:00,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:00,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740 2023-07-13 15:16:00,136 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740 2023-07-13 15:16:00,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
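The CompactionConfiguration dump repeated for each column family (minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0, ExploringCompactionPolicy) reflects the default compaction settings. As a sketch, and assuming these commonly used key names match your release, the same values map onto configuration like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionPolicySettings {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
        System.out.println(conf.get("hbase.hstore.compaction.ratio"));
      }
    }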
2023-07-13 15:16:00,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:00,147 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9707506880, jitterRate=-0.0959179699420929}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:00,147 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:00,157 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689261360007 2023-07-13 15:16:00,180 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 15:16:00,181 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 15:16:00,181 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40971,1689261357748, state=OPEN 2023-07-13 15:16:00,184 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:00,184 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:00,189 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 15:16:00,189 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40971,1689261357748 in 355 msec 2023-07-13 15:16:00,194 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 15:16:00,194 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 591 msec 2023-07-13 15:16:00,199 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0310 sec 2023-07-13 15:16:00,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689261360199, completionTime=-1 2023-07-13 15:16:00,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 15:16:00,199 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
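With pid=1/2/3 finished, hbase:meta is OPEN on jenkins-hbase4.apache.org,40971 and the NotServingRegionException seen earlier while the region was still OPENING stops recurring. A minimal client sketch, assuming the test cluster's configuration is available, that scans the now-online catalog table:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class ScanMeta {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan())) {
          for (Result row : scanner) {
            System.out.println(row); // one row per region, e.g. hbase:namespace once pid=4 completes
          }
        }
      }
    }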
2023-07-13 15:16:00,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 15:16:00,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689261420268 2023-07-13 15:16:00,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689261480268 2023-07-13 15:16:00,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 68 msec 2023-07-13 15:16:00,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33053,1689261355495-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:00,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33053,1689261355495-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:00,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33053,1689261355495-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:00,297 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33053, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:00,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:00,306 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 15:16:00,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
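The master chores registered here (ClusterStatusChore, BalancerChore and RegionNormalizerChore every 300 s, CatalogJanitor, HbckChore) run the same housekeeping that an operator can trigger on demand. A small sketch of the on-demand equivalents for two of them, assuming an already-open Admin handle as in the earlier examples:

    import org.apache.hadoop.hbase.client.Admin;

    public class RunHousekeeping {
      // Assumes an already-open Admin (see the ConnectionFactory examples above).
      static void runOnce(Admin admin) throws Exception {
        boolean balancerRan = admin.balance();     // what BalancerChore does every 300000 ms
        boolean normalizerRan = admin.normalize(); // what RegionNormalizerChore does every 300000 ms
        System.out.println("balance=" + balancerRan + ", normalize=" + normalizerRan);
      }
    }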
2023-07-13 15:16:00,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:00,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 15:16:00,343 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:00,346 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:00,365 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,369 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142 empty. 2023-07-13 15:16:00,370 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,370 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 15:16:00,431 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:00,433 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1c39d35808badfb6a5d66d7a6a08f142, NAME => 'hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:00,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:00,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1c39d35808badfb6a5d66d7a6a08f142, disabling compactions & flushes 2023-07-13 15:16:00,456 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 
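The create logged above for 'hbase:namespace' spells out the full column-family schema (BLOOMFILTER ROW, IN_MEMORY true, VERSIONS 10, BLOCKSIZE 8192). The master creates this table itself during startup; purely as an illustration of the same descriptor expressed with the public client API, under the assumption of an existing Admin handle:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeTable {
      static void create(Admin admin) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                .setInMemory(true)                 // IN_MEMORY => 'true'
                .setMaxVersions(10)                // VERSIONS => '10'
                .setBlocksize(8192)                // BLOCKSIZE => '8192'
                .build())
            .build();
        admin.createTable(desc);
      }
    }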
2023-07-13 15:16:00,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:00,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. after waiting 0 ms 2023-07-13 15:16:00,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:00,456 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:00,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1c39d35808badfb6a5d66d7a6a08f142: 2023-07-13 15:16:00,461 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:00,477 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261360464"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261360464"}]},"ts":"1689261360464"} 2023-07-13 15:16:00,505 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:00,506 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:00,511 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261360506"}]},"ts":"1689261360506"} 2023-07-13 15:16:00,515 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 15:16:00,519 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:00,519 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:00,519 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:00,519 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:00,519 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:00,521 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1c39d35808badfb6a5d66d7a6a08f142, ASSIGN}] 2023-07-13 15:16:00,524 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1c39d35808badfb6a5d66d7a6a08f142, ASSIGN 2023-07-13 15:16:00,526 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1c39d35808badfb6a5d66d7a6a08f142, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:00,543 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:00,546 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 15:16:00,549 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:00,551 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:00,555 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,556 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d empty. 
2023-07-13 15:16:00,556 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,557 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 15:16:00,582 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:00,584 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 24214add90ee9cbdd631baadba96052d, NAME => 'hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:00,609 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:00,609 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 24214add90ee9cbdd631baadba96052d, disabling compactions & flushes 2023-07-13 15:16:00,609 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:00,609 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:00,609 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. after waiting 0 ms 2023-07-13 15:16:00,609 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:00,609 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 
2023-07-13 15:16:00,609 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 24214add90ee9cbdd631baadba96052d: 2023-07-13 15:16:00,613 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:00,615 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261360615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261360615"}]},"ts":"1689261360615"} 2023-07-13 15:16:00,622 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:00,624 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:00,625 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261360625"}]},"ts":"1689261360625"} 2023-07-13 15:16:00,628 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 15:16:00,640 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:00,640 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:00,640 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:00,640 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:00,640 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:00,640 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=24214add90ee9cbdd631baadba96052d, ASSIGN}] 2023-07-13 15:16:00,645 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=24214add90ee9cbdd631baadba96052d, ASSIGN 2023-07-13 15:16:00,647 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=24214add90ee9cbdd631baadba96052d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:00,648 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-13 15:16:00,652 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1c39d35808badfb6a5d66d7a6a08f142, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:00,652 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261360651"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261360651"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261360651"}]},"ts":"1689261360651"} 2023-07-13 15:16:00,655 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=24214add90ee9cbdd631baadba96052d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:00,655 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261360655"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261360655"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261360655"}]},"ts":"1689261360655"} 2023-07-13 15:16:00,660 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 1c39d35808badfb6a5d66d7a6a08f142, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:00,664 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 24214add90ee9cbdd631baadba96052d, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:00,818 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:00,818 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:00,823 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38924, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:00,825 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 
2023-07-13 15:16:00,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1c39d35808badfb6a5d66d7a6a08f142, NAME => 'hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:00,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,826 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:00,826 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,826 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,828 INFO [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,828 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:00,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 24214add90ee9cbdd631baadba96052d, NAME => 'hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:00,829 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:00,829 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. service=MultiRowMutationService 2023-07-13 15:16:00,829 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 15:16:00,829 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:00,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,831 DEBUG [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/info 2023-07-13 15:16:00,831 DEBUG [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/info 2023-07-13 15:16:00,832 INFO [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1c39d35808badfb6a5d66d7a6a08f142 columnFamilyName info 2023-07-13 15:16:00,833 INFO [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] regionserver.HStore(310): Store=1c39d35808badfb6a5d66d7a6a08f142/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:00,835 INFO [StoreOpener-24214add90ee9cbdd631baadba96052d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,838 DEBUG [StoreOpener-24214add90ee9cbdd631baadba96052d-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m 2023-07-13 15:16:00,838 DEBUG [StoreOpener-24214add90ee9cbdd631baadba96052d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m 2023-07-13 15:16:00,839 INFO [StoreOpener-24214add90ee9cbdd631baadba96052d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 24214add90ee9cbdd631baadba96052d columnFamilyName m 2023-07-13 15:16:00,840 INFO [StoreOpener-24214add90ee9cbdd631baadba96052d-1] regionserver.HStore(310): Store=24214add90ee9cbdd631baadba96052d/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:00,842 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,844 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:00,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:00,849 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1c39d35808badfb6a5d66d7a6a08f142; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11996688960, jitterRate=0.11727872490882874}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:00,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1c39d35808badfb6a5d66d7a6a08f142: 2023-07-13 15:16:00,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:00,852 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142., pid=8, masterSystemTime=1689261360816 2023-07-13 15:16:00,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:00,861 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 24214add90ee9cbdd631baadba96052d; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5622d638, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:00,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 24214add90ee9cbdd631baadba96052d: 2023-07-13 15:16:00,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:00,861 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:00,863 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d., pid=9, masterSystemTime=1689261360818 2023-07-13 15:16:00,865 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1c39d35808badfb6a5d66d7a6a08f142, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:00,866 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261360863"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261360863"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261360863"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261360863"}]},"ts":"1689261360863"} 2023-07-13 15:16:00,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:00,873 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 
2023-07-13 15:16:00,874 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=24214add90ee9cbdd631baadba96052d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:00,875 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-13 15:16:00,875 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261360873"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261360873"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261360873"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261360873"}]},"ts":"1689261360873"} 2023-07-13 15:16:00,875 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 1c39d35808badfb6a5d66d7a6a08f142, server=jenkins-hbase4.apache.org,40971,1689261357748 in 209 msec 2023-07-13 15:16:00,883 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-13 15:16:00,883 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1c39d35808badfb6a5d66d7a6a08f142, ASSIGN in 354 msec 2023-07-13 15:16:00,885 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-13 15:16:00,890 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 24214add90ee9cbdd631baadba96052d, server=jenkins-hbase4.apache.org,32995,1689261357367 in 217 msec 2023-07-13 15:16:00,891 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:00,892 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261360892"}]},"ts":"1689261360892"} 2023-07-13 15:16:00,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-13 15:16:00,901 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 15:16:00,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=24214add90ee9cbdd631baadba96052d, ASSIGN in 245 msec 2023-07-13 15:16:00,903 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:00,903 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261360903"}]},"ts":"1689261360903"} 2023-07-13 15:16:00,905 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:00,910 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 580 msec 2023-07-13 15:16:00,911 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 15:16:00,915 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:00,920 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 371 msec 2023-07-13 15:16:00,942 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 15:16:00,944 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:00,944 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:00,971 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:00,978 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38930, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:00,988 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 15:16:00,988 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-13 15:16:00,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 15:16:01,009 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:01,017 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 43 msec 2023-07-13 15:16:01,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 15:16:01,035 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:01,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-07-13 15:16:01,049 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 15:16:01,052 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 15:16:01,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.080sec 2023-07-13 15:16:01,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-13 15:16:01,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 15:16:01,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 15:16:01,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33053,1689261355495-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 15:16:01,060 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33053,1689261355495-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-13 15:16:01,069 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:01,069 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:01,070 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 15:16:01,074 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:01,084 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 15:16:01,116 DEBUG [Listener at localhost/37749] zookeeper.ReadOnlyZKClient(139): Connect 0x3e4d79c0 to 127.0.0.1:52275 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:01,124 DEBUG [Listener at localhost/37749] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@326b7986, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:01,144 DEBUG [hconnection-0x497c82a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:01,158 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45134, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:01,172 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:16:01,174 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:01,188 DEBUG [Listener at localhost/37749] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 15:16:01,191 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50614, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 15:16:01,205 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 15:16:01,205 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:01,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 15:16:01,212 DEBUG [Listener at localhost/37749] zookeeper.ReadOnlyZKClient(139): Connect 0x13aa6d9f to 127.0.0.1:52275 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:01,219 DEBUG [Listener at localhost/37749] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30296f6d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:01,219 INFO [Listener at localhost/37749] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:52275 2023-07-13 15:16:01,225 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:01,225 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015f41312f000a connected 2023-07-13 15:16:01,257 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=678, MaxFileDescriptor=60000, SystemLoadAverage=484, ProcessCount=172, AvailableMemoryMB=5348 2023-07-13 15:16:01,260 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-13 15:16:01,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:01,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:01,335 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 15:16:01,353 INFO [Listener at localhost/37749] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:01,353 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:01,354 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:01,354 INFO [Listener at localhost/37749] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:01,354 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:01,354 INFO [Listener at localhost/37749] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:01,354 INFO [Listener at localhost/37749] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:01,358 INFO [Listener at localhost/37749] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34377 2023-07-13 15:16:01,359 INFO [Listener at localhost/37749] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:01,361 DEBUG [Listener at localhost/37749] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:01,363 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:01,366 INFO [Listener at localhost/37749] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:01,369 INFO [Listener at localhost/37749] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34377 connecting to ZooKeeper ensemble=127.0.0.1:52275 2023-07-13 15:16:01,373 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:343770x0, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:01,379 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34377-0x1015f41312f000b connected 2023-07-13 15:16:01,386 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(162): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:01,387 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(162): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 15:16:01,388 DEBUG [Listener at localhost/37749] zookeeper.ZKUtil(164): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:01,395 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34377 2023-07-13 15:16:01,398 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34377 2023-07-13 15:16:01,399 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34377 2023-07-13 15:16:01,400 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34377 2023-07-13 15:16:01,402 DEBUG [Listener at localhost/37749] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34377 2023-07-13 15:16:01,405 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:01,406 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:01,406 INFO [Listener at localhost/37749] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:01,407 INFO [Listener at localhost/37749] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:01,407 INFO [Listener at localhost/37749] http.HttpServer(886): Added 
filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:01,407 INFO [Listener at localhost/37749] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:01,407 INFO [Listener at localhost/37749] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:01,408 INFO [Listener at localhost/37749] http.HttpServer(1146): Jetty bound to port 44651 2023-07-13 15:16:01,408 INFO [Listener at localhost/37749] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:01,427 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:01,427 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@593950e0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:01,428 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:01,428 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b0143bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:01,559 INFO [Listener at localhost/37749] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:01,560 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:01,560 INFO [Listener at localhost/37749] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:01,561 INFO [Listener at localhost/37749] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:01,561 INFO [Listener at localhost/37749] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:01,563 INFO [Listener at localhost/37749] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@45ea4e7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/java.io.tmpdir/jetty-0_0_0_0-44651-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8447153226094830195/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:01,564 INFO [Listener at localhost/37749] server.AbstractConnector(333): Started ServerConnector@53265acb{HTTP/1.1, (http/1.1)}{0.0.0.0:44651} 2023-07-13 15:16:01,564 INFO [Listener at localhost/37749] server.Server(415): Started @11645ms 2023-07-13 15:16:01,568 INFO [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(951): ClusterId : 21793049-9caa-430c-8afc-71847c49302d 2023-07-13 15:16:01,568 DEBUG [RS:3;jenkins-hbase4:34377] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 
15:16:01,571 DEBUG [RS:3;jenkins-hbase4:34377] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:01,571 DEBUG [RS:3;jenkins-hbase4:34377] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:01,573 DEBUG [RS:3;jenkins-hbase4:34377] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:01,575 DEBUG [RS:3;jenkins-hbase4:34377] zookeeper.ReadOnlyZKClient(139): Connect 0x26d6c895 to 127.0.0.1:52275 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:01,594 DEBUG [RS:3;jenkins-hbase4:34377] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2811323f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:01,595 DEBUG [RS:3;jenkins-hbase4:34377] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c2ea3e5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:01,607 DEBUG [RS:3;jenkins-hbase4:34377] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:34377 2023-07-13 15:16:01,607 INFO [RS:3;jenkins-hbase4:34377] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:01,607 INFO [RS:3;jenkins-hbase4:34377] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:01,607 DEBUG [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:01,608 INFO [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33053,1689261355495 with isa=jenkins-hbase4.apache.org/172.31.14.131:34377, startcode=1689261361353 2023-07-13 15:16:01,609 DEBUG [RS:3;jenkins-hbase4:34377] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:01,616 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60699, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:01,616 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33053] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,616 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:01,619 DEBUG [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536 2023-07-13 15:16:01,619 DEBUG [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37375 2023-07-13 15:16:01,619 DEBUG [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40719 2023-07-13 15:16:01,626 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:01,626 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:01,626 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:01,626 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:01,626 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:01,627 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34377,1689261361353] 2023-07-13 15:16:01,627 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:01,627 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:01,627 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:01,627 DEBUG [RS:3;jenkins-hbase4:34377] zookeeper.ZKUtil(162): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,628 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:01,628 WARN [RS:3;jenkins-hbase4:34377] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 15:16:01,628 INFO [RS:3;jenkins-hbase4:34377] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:01,631 DEBUG [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,640 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33053,1689261355495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 15:16:01,641 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:01,641 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:01,641 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:01,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:01,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:01,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:01,652 DEBUG [RS:3;jenkins-hbase4:34377] zookeeper.ZKUtil(162): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:01,653 DEBUG [RS:3;jenkins-hbase4:34377] zookeeper.ZKUtil(162): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:01,653 DEBUG [RS:3;jenkins-hbase4:34377] zookeeper.ZKUtil(162): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,654 DEBUG [RS:3;jenkins-hbase4:34377] zookeeper.ZKUtil(162): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:01,655 DEBUG [RS:3;jenkins-hbase4:34377] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:01,656 INFO [RS:3;jenkins-hbase4:34377] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:01,662 INFO [RS:3;jenkins-hbase4:34377] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:01,663 INFO [RS:3;jenkins-hbase4:34377] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:01,663 INFO [RS:3;jenkins-hbase4:34377] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:01,665 INFO [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:01,672 INFO [RS:3;jenkins-hbase4:34377] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:01,672 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,672 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,673 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,673 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,673 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,673 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:01,673 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,673 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,673 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,673 DEBUG [RS:3;jenkins-hbase4:34377] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:01,681 
INFO [RS:3;jenkins-hbase4:34377] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:01,681 INFO [RS:3;jenkins-hbase4:34377] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:01,681 INFO [RS:3;jenkins-hbase4:34377] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:01,694 INFO [RS:3;jenkins-hbase4:34377] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:01,694 INFO [RS:3;jenkins-hbase4:34377] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34377,1689261361353-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:01,708 INFO [RS:3;jenkins-hbase4:34377] regionserver.Replication(203): jenkins-hbase4.apache.org,34377,1689261361353 started 2023-07-13 15:16:01,708 INFO [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34377,1689261361353, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34377, sessionid=0x1015f41312f000b 2023-07-13 15:16:01,708 DEBUG [RS:3;jenkins-hbase4:34377] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:01,708 DEBUG [RS:3;jenkins-hbase4:34377] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,708 DEBUG [RS:3;jenkins-hbase4:34377] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34377,1689261361353' 2023-07-13 15:16:01,708 DEBUG [RS:3;jenkins-hbase4:34377] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:01,710 DEBUG [RS:3;jenkins-hbase4:34377] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:01,711 DEBUG [RS:3;jenkins-hbase4:34377] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:01,711 DEBUG [RS:3;jenkins-hbase4:34377] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:01,711 DEBUG [RS:3;jenkins-hbase4:34377] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:01,711 DEBUG [RS:3;jenkins-hbase4:34377] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34377,1689261361353' 2023-07-13 15:16:01,711 DEBUG [RS:3;jenkins-hbase4:34377] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:01,712 DEBUG [RS:3;jenkins-hbase4:34377] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:01,712 DEBUG [RS:3;jenkins-hbase4:34377] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:01,713 INFO [RS:3;jenkins-hbase4:34377] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:01,713 INFO [RS:3;jenkins-hbase4:34377] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:01,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:01,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:01,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:01,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:01,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:01,731 DEBUG [hconnection-0x120ad869-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:01,736 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45148, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:01,744 DEBUG [hconnection-0x120ad869-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:01,747 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38932, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:01,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:01,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:01,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:01,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:01,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:50614 deadline: 1689262561761, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:01,764 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:01,766 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:01,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:01,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:01,769 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:01,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:01,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:01,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:01,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:01,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:01,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:01,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:01,786 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:01,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:01,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:01,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:01,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:01,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377] to rsgroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:01,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:01,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:01,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:01,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:01,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(238): Moving server region 24214add90ee9cbdd631baadba96052d, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:01,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=24214add90ee9cbdd631baadba96052d, REOPEN/MOVE 2023-07-13 15:16:01,816 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=24214add90ee9cbdd631baadba96052d, REOPEN/MOVE 2023-07-13 15:16:01,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 15:16:01,818 INFO [RS:3;jenkins-hbase4:34377] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34377%2C1689261361353, suffix=, logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,34377,1689261361353, archiveDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs, maxLogs=32 2023-07-13 15:16:01,821 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=24214add90ee9cbdd631baadba96052d, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:01,821 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261361821"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261361821"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261361821"}]},"ts":"1689261361821"} 2023-07-13 15:16:01,824 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 24214add90ee9cbdd631baadba96052d, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:01,864 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK] 2023-07-13 15:16:01,865 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK] 2023-07-13 15:16:01,865 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK] 2023-07-13 15:16:01,880 INFO [RS:3;jenkins-hbase4:34377] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,34377,1689261361353/jenkins-hbase4.apache.org%2C34377%2C1689261361353.1689261361819 2023-07-13 15:16:01,880 DEBUG [RS:3;jenkins-hbase4:34377] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK], DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK], DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK]] 2023-07-13 15:16:01,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:01,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 24214add90ee9cbdd631baadba96052d, disabling compactions & flushes 2023-07-13 15:16:01,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:01,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:01,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. after waiting 0 ms 2023-07-13 15:16:01,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 
2023-07-13 15:16:01,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 24214add90ee9cbdd631baadba96052d 1/1 column families, dataSize=1.38 KB heapSize=2.35 KB 2023-07-13 15:16:02,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/.tmp/m/e83d42d875ef413ab660e62ee060b38e 2023-07-13 15:16:02,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/.tmp/m/e83d42d875ef413ab660e62ee060b38e as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m/e83d42d875ef413ab660e62ee060b38e 2023-07-13 15:16:02,202 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m/e83d42d875ef413ab660e62ee060b38e, entries=3, sequenceid=9, filesize=5.2 K 2023-07-13 15:16:02,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1410, heapSize ~2.34 KB/2392, currentSize=0 B/0 for 24214add90ee9cbdd631baadba96052d in 214ms, sequenceid=9, compaction requested=false 2023-07-13 15:16:02,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 15:16:02,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-13 15:16:02,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:02,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 
2023-07-13 15:16:02,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 24214add90ee9cbdd631baadba96052d: 2023-07-13 15:16:02,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 24214add90ee9cbdd631baadba96052d move to jenkins-hbase4.apache.org,44089,1689261357555 record at close sequenceid=9 2023-07-13 15:16:02,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:02,248 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=24214add90ee9cbdd631baadba96052d, regionState=CLOSED 2023-07-13 15:16:02,249 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261362248"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261362248"}]},"ts":"1689261362248"} 2023-07-13 15:16:02,256 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 15:16:02,256 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 24214add90ee9cbdd631baadba96052d, server=jenkins-hbase4.apache.org,32995,1689261357367 in 427 msec 2023-07-13 15:16:02,257 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=24214add90ee9cbdd631baadba96052d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:02,408 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 15:16:02,409 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=24214add90ee9cbdd631baadba96052d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:02,409 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261362408"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261362408"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261362408"}]},"ts":"1689261362408"} 2023-07-13 15:16:02,412 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 24214add90ee9cbdd631baadba96052d, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:02,567 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:02,567 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:02,571 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37802, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:02,576 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:02,576 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 24214add90ee9cbdd631baadba96052d, NAME => 'hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:02,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:02,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. service=MultiRowMutationService 2023-07-13 15:16:02,577 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 15:16:02,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:02,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:02,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:02,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:02,579 INFO [StoreOpener-24214add90ee9cbdd631baadba96052d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:02,580 DEBUG [StoreOpener-24214add90ee9cbdd631baadba96052d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m 2023-07-13 15:16:02,580 DEBUG [StoreOpener-24214add90ee9cbdd631baadba96052d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m 2023-07-13 15:16:02,581 INFO [StoreOpener-24214add90ee9cbdd631baadba96052d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 24214add90ee9cbdd631baadba96052d columnFamilyName m 2023-07-13 15:16:02,595 DEBUG [StoreOpener-24214add90ee9cbdd631baadba96052d-1] regionserver.HStore(539): loaded hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m/e83d42d875ef413ab660e62ee060b38e 2023-07-13 15:16:02,596 INFO [StoreOpener-24214add90ee9cbdd631baadba96052d-1] regionserver.HStore(310): Store=24214add90ee9cbdd631baadba96052d/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:02,597 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:02,600 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:02,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:02,606 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 24214add90ee9cbdd631baadba96052d; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@146edeab, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:02,606 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 24214add90ee9cbdd631baadba96052d: 2023-07-13 15:16:02,610 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d., pid=14, masterSystemTime=1689261362567 2023-07-13 15:16:02,615 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:02,616 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:02,617 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=24214add90ee9cbdd631baadba96052d, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:02,617 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261362617"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261362617"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261362617"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261362617"}]},"ts":"1689261362617"} 2023-07-13 15:16:02,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-13 15:16:02,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 24214add90ee9cbdd631baadba96052d, server=jenkins-hbase4.apache.org,44089,1689261357555 in 209 msec 2023-07-13 15:16:02,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=24214add90ee9cbdd631baadba96052d, REOPEN/MOVE in 811 msec 2023-07-13 15:16:02,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-13 15:16:02,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353] are moved back to default 2023-07-13 15:16:02,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:02,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:02,820 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32995] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:38932 deadline: 1689261422820, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44089 startCode=1689261357555. As of locationSeqNum=9. 2023-07-13 15:16:02,926 DEBUG [hconnection-0x120ad869-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:02,932 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37816, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:02,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:02,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:02,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:02,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:02,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:02,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:02,971 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:02,974 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32995] ipc.CallRunner(144): callId: 43 service: ClientService methodName: ExecService size: 617 connection: 172.31.14.131:38930 deadline: 1689261422974, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44089 startCode=1689261357555. As of locationSeqNum=9. 
2023-07-13 15:16:02,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-13 15:16:02,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 15:16:03,079 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:03,083 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37830, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:03,087 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:03,088 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:03,088 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:03,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 15:16:03,092 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:03,099 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:03,107 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:03,111 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd empty. 2023-07-13 15:16:03,111 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:03,111 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:03,115 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:03,115 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 empty. 
2023-07-13 15:16:03,115 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:03,115 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 empty. 2023-07-13 15:16:03,116 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:03,118 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 empty. 2023-07-13 15:16:03,119 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:03,119 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:03,119 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:03,120 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 empty. 
2023-07-13 15:16:03,121 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:03,121 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 15:16:03,175 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:03,190 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => c3d4810d727b59e7c21e0a7b9d6f54cd, NAME => 'Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:03,191 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 3c925ad775000ce1325a3996abbf89e5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:03,194 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9bac41a54f2c9595fd1e1efdb78b39a8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:03,284 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:03,284 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 
9bac41a54f2c9595fd1e1efdb78b39a8, disabling compactions & flushes 2023-07-13 15:16:03,285 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:03,285 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:03,285 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. after waiting 0 ms 2023-07-13 15:16:03,285 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:03,285 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:03,285 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 9bac41a54f2c9595fd1e1efdb78b39a8: 2023-07-13 15:16:03,286 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 97e606e1ec92bfb6ab11692abe9896c2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:03,292 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:03,293 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 3c925ad775000ce1325a3996abbf89e5, disabling compactions & flushes 2023-07-13 15:16:03,293 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:03,293 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:03,293 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 
after waiting 0 ms 2023-07-13 15:16:03,293 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:03,293 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:03,293 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 3c925ad775000ce1325a3996abbf89e5: 2023-07-13 15:16:03,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 15:16:03,294 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => c142dc7ed03b0397dcb6a04587d3d532, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:03,346 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:03,346 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing c142dc7ed03b0397dcb6a04587d3d532, disabling compactions & flushes 2023-07-13 15:16:03,346 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:03,346 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:03,347 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 97e606e1ec92bfb6ab11692abe9896c2, disabling compactions & flushes 2023-07-13 15:16:03,347 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:03,347 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 
2023-07-13 15:16:03,347 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. after waiting 0 ms 2023-07-13 15:16:03,347 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:03,347 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:03,347 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 97e606e1ec92bfb6ab11692abe9896c2: 2023-07-13 15:16:03,347 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:03,347 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. after waiting 0 ms 2023-07-13 15:16:03,347 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:03,347 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:03,348 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for c142dc7ed03b0397dcb6a04587d3d532: 2023-07-13 15:16:03,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 15:16:03,686 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:03,686 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing c3d4810d727b59e7c21e0a7b9d6f54cd, disabling compactions & flushes 2023-07-13 15:16:03,686 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:03,686 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:03,686 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 
after waiting 0 ms 2023-07-13 15:16:03,686 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:03,686 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:03,687 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for c3d4810d727b59e7c21e0a7b9d6f54cd: 2023-07-13 15:16:03,691 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:03,693 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261363692"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261363692"}]},"ts":"1689261363692"} 2023-07-13 15:16:03,693 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261363692"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261363692"}]},"ts":"1689261363692"} 2023-07-13 15:16:03,693 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261363692"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261363692"}]},"ts":"1689261363692"} 2023-07-13 15:16:03,693 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261363692"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261363692"}]},"ts":"1689261363692"} 2023-07-13 15:16:03,693 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261363692"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261363692"}]},"ts":"1689261363692"} 2023-07-13 15:16:03,757 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
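Each Put above writes an info:regioninfo and an info:state cell into a row of hbase:meta keyed by "<table>,<startKey>,<timestamp>.<encodedName>.". A hedged sketch of reading those rows back with the standard client API is shown below; the table name and qualifiers come from the log, while the class name and the prefix scan are illustrative assumptions.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanMetaForTable {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Region rows in hbase:meta start with "<tableName>," followed by the region start key.
          Scan scan = new Scan().setRowPrefixFilter(
              Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"));
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result r : scanner) {
              byte[] state = r.getValue(info, Bytes.toBytes("state"));
              System.out.println(Bytes.toStringBinary(r.getRow()) + " -> state="
                  + (state == null ? "?" : Bytes.toString(state)));
            }
          }
        }
      }
    }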
2023-07-13 15:16:03,758 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:03,758 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261363758"}]},"ts":"1689261363758"} 2023-07-13 15:16:03,760 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-13 15:16:03,774 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:03,774 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:03,775 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:03,775 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:03,775 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, ASSIGN}] 2023-07-13 15:16:03,785 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, ASSIGN 2023-07-13 15:16:03,786 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, ASSIGN 2023-07-13 15:16:03,788 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, ASSIGN 2023-07-13 15:16:03,788 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, ASSIGN 2023-07-13 15:16:03,791 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:03,792 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, ASSIGN 2023-07-13 15:16:03,792 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:03,792 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:03,792 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:03,794 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:03,942 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
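The plan above places the five regions across the region servers at ports 44089 and 40971. A client can observe the resulting placement through the standard RegionLocator API; the sketch below is an illustration under that assumption, not part of TestRSGroupsAdmin1 itself.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ShowRegionLocations {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(tn)) {
          // One entry per region: encoded name, start key, and hosting region server.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " ["
                + Bytes.toStringBinary(loc.getRegion().getStartKey()) + "] on "
                + loc.getServerName());
          }
        }
      }
    }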
2023-07-13 15:16:03,945 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=97e606e1ec92bfb6ab11692abe9896c2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:03,945 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=c3d4810d727b59e7c21e0a7b9d6f54cd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:03,945 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=c142dc7ed03b0397dcb6a04587d3d532, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:03,945 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261363945"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261363945"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261363945"}]},"ts":"1689261363945"} 2023-07-13 15:16:03,945 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=3c925ad775000ce1325a3996abbf89e5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:03,945 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=9bac41a54f2c9595fd1e1efdb78b39a8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:03,945 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261363945"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261363945"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261363945"}]},"ts":"1689261363945"} 2023-07-13 15:16:03,945 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261363945"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261363945"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261363945"}]},"ts":"1689261363945"} 2023-07-13 15:16:03,946 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261363945"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261363945"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261363945"}]},"ts":"1689261363945"} 2023-07-13 15:16:03,946 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261363945"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261363945"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261363945"}]},"ts":"1689261363945"} 2023-07-13 15:16:03,950 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=19, state=RUNNABLE; OpenRegionProcedure 
97e606e1ec92bfb6ab11692abe9896c2, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:03,952 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=20, state=RUNNABLE; OpenRegionProcedure c142dc7ed03b0397dcb6a04587d3d532, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:03,954 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=16, state=RUNNABLE; OpenRegionProcedure c3d4810d727b59e7c21e0a7b9d6f54cd, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:03,956 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=18, state=RUNNABLE; OpenRegionProcedure 9bac41a54f2c9595fd1e1efdb78b39a8, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:03,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=17, state=RUNNABLE; OpenRegionProcedure 3c925ad775000ce1325a3996abbf89e5, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:04,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 15:16:04,112 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:04,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 97e606e1ec92bfb6ab11692abe9896c2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 15:16:04,112 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 
2023-07-13 15:16:04,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:04,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c142dc7ed03b0397dcb6a04587d3d532, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 15:16:04,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:04,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:04,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:04,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:04,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:04,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:04,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:04,116 INFO [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:04,116 INFO [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:04,118 DEBUG [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/f 2023-07-13 15:16:04,118 DEBUG [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/f 2023-07-13 15:16:04,118 DEBUG [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/f 2023-07-13 15:16:04,118 DEBUG [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/f 2023-07-13 15:16:04,119 INFO [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 97e606e1ec92bfb6ab11692abe9896c2 columnFamilyName f 2023-07-13 15:16:04,119 INFO [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c142dc7ed03b0397dcb6a04587d3d532 columnFamilyName f 2023-07-13 15:16:04,120 INFO [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] regionserver.HStore(310): Store=97e606e1ec92bfb6ab11692abe9896c2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:04,121 INFO [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] regionserver.HStore(310): Store=c142dc7ed03b0397dcb6a04587d3d532/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:04,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:04,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:04,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:04,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:04,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:04,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:04,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:04,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c142dc7ed03b0397dcb6a04587d3d532; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9698778720, jitterRate=-0.09673084318637848}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:04,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c142dc7ed03b0397dcb6a04587d3d532: 2023-07-13 15:16:04,140 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532., pid=22, masterSystemTime=1689261364106 2023-07-13 15:16:04,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:04,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:04,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 
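A side note on the "Opened c142dc7ed03b0397dcb6a04587d3d532 ... desiredMaxFileSize=9698778720, jitterRate=-0.09673084318637848" entry above: desiredMaxFileSize is the per-region jittered split threshold, and the logged values are consistent with base * (1 + jitterRate) where the base is 10737418240 bytes, the stock hbase.hregion.max.filesize of 10 GiB (an inference from the numbers; the log does not show the configured value):

    10737418240 * (1 - 0.09673084318637848) ≈ 9698778720   (c142dc7ed03b0397dcb6a04587d3d532)
    10737418240 * (1 + 0.06499393284320831) ≈ 11435285280  (97e606e1ec92bfb6ab11692abe9896c2, opened in the next entries)

The jitter keeps all regions of a table from reaching the split threshold at exactly the same size.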
2023-07-13 15:16:04,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3c925ad775000ce1325a3996abbf89e5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 15:16:04,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:04,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:04,144 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=c142dc7ed03b0397dcb6a04587d3d532, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:04,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:04,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:04,144 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261364144"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261364144"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261364144"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261364144"}]},"ts":"1689261364144"} 2023-07-13 15:16:04,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:04,146 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 97e606e1ec92bfb6ab11692abe9896c2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11435285280, jitterRate=0.06499393284320831}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:04,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 97e606e1ec92bfb6ab11692abe9896c2: 2023-07-13 15:16:04,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2., pid=21, masterSystemTime=1689261364106 2023-07-13 15:16:04,148 INFO [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:04,151 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:04,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:04,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:04,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9bac41a54f2c9595fd1e1efdb78b39a8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 15:16:04,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:04,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:04,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:04,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:04,152 DEBUG [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/f 2023-07-13 15:16:04,152 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=97e606e1ec92bfb6ab11692abe9896c2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:04,153 DEBUG [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/f 2023-07-13 15:16:04,153 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261364152"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261364152"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261364152"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261364152"}]},"ts":"1689261364152"} 2023-07-13 15:16:04,155 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=20 2023-07-13 15:16:04,155 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=20, state=SUCCESS; 
OpenRegionProcedure c142dc7ed03b0397dcb6a04587d3d532, server=jenkins-hbase4.apache.org,44089,1689261357555 in 198 msec 2023-07-13 15:16:04,155 INFO [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3c925ad775000ce1325a3996abbf89e5 columnFamilyName f 2023-07-13 15:16:04,158 INFO [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:04,159 INFO [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] regionserver.HStore(310): Store=3c925ad775000ce1325a3996abbf89e5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:04,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, ASSIGN in 380 msec 2023-07-13 15:16:04,162 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=19 2023-07-13 15:16:04,162 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=19, state=SUCCESS; OpenRegionProcedure 97e606e1ec92bfb6ab11692abe9896c2, server=jenkins-hbase4.apache.org,40971,1689261357748 in 208 msec 2023-07-13 15:16:04,162 DEBUG [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/f 2023-07-13 15:16:04,162 DEBUG [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/f 2023-07-13 15:16:04,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:04,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:04,163 INFO [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9bac41a54f2c9595fd1e1efdb78b39a8 columnFamilyName f 2023-07-13 15:16:04,164 INFO [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] regionserver.HStore(310): Store=9bac41a54f2c9595fd1e1efdb78b39a8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:04,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:04,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, ASSIGN in 387 msec 2023-07-13 15:16:04,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:04,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:04,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:04,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:04,176 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3c925ad775000ce1325a3996abbf89e5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11314560480, jitterRate=0.05375055968761444}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:04,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3c925ad775000ce1325a3996abbf89e5: 2023-07-13 15:16:04,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5., pid=25, masterSystemTime=1689261364106 2023-07-13 15:16:04,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 
2023-07-13 15:16:04,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:04,185 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=3c925ad775000ce1325a3996abbf89e5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:04,186 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261364185"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261364185"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261364185"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261364185"}]},"ts":"1689261364185"} 2023-07-13 15:16:04,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:04,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9bac41a54f2c9595fd1e1efdb78b39a8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10048128960, jitterRate=-0.06419506669044495}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:04,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9bac41a54f2c9595fd1e1efdb78b39a8: 2023-07-13 15:16:04,188 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8., pid=24, masterSystemTime=1689261364106 2023-07-13 15:16:04,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:04,190 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:04,190 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 
2023-07-13 15:16:04,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c3d4810d727b59e7c21e0a7b9d6f54cd, NAME => 'Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 15:16:04,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:04,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:04,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:04,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:04,192 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=17 2023-07-13 15:16:04,193 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=9bac41a54f2c9595fd1e1efdb78b39a8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:04,193 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=17, state=SUCCESS; OpenRegionProcedure 3c925ad775000ce1325a3996abbf89e5, server=jenkins-hbase4.apache.org,44089,1689261357555 in 227 msec 2023-07-13 15:16:04,194 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261364192"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261364192"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261364192"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261364192"}]},"ts":"1689261364192"} 2023-07-13 15:16:04,198 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, ASSIGN in 418 msec 2023-07-13 15:16:04,199 INFO [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:04,202 DEBUG [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/f 2023-07-13 15:16:04,202 DEBUG [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/f 2023-07-13 15:16:04,203 INFO [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c3d4810d727b59e7c21e0a7b9d6f54cd columnFamilyName f 2023-07-13 15:16:04,203 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=18 2023-07-13 15:16:04,203 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=18, state=SUCCESS; OpenRegionProcedure 9bac41a54f2c9595fd1e1efdb78b39a8, server=jenkins-hbase4.apache.org,40971,1689261357748 in 240 msec 2023-07-13 15:16:04,204 INFO [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] regionserver.HStore(310): Store=c3d4810d727b59e7c21e0a7b9d6f54cd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:04,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:04,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:04,207 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, ASSIGN in 428 msec 2023-07-13 15:16:04,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:04,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:04,228 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c3d4810d727b59e7c21e0a7b9d6f54cd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10438603040, jitterRate=-0.027829334139823914}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:04,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c3d4810d727b59e7c21e0a7b9d6f54cd: 2023-07-13 
15:16:04,229 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd., pid=23, masterSystemTime=1689261364106 2023-07-13 15:16:04,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:04,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:04,233 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=c3d4810d727b59e7c21e0a7b9d6f54cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:04,233 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261364233"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261364233"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261364233"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261364233"}]},"ts":"1689261364233"} 2023-07-13 15:16:04,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=16 2023-07-13 15:16:04,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=16, state=SUCCESS; OpenRegionProcedure c3d4810d727b59e7c21e0a7b9d6f54cd, server=jenkins-hbase4.apache.org,40971,1689261357748 in 282 msec 2023-07-13 15:16:04,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-13 15:16:04,250 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, ASSIGN in 464 msec 2023-07-13 15:16:04,255 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:04,255 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261364255"}]},"ts":"1689261364255"} 2023-07-13 15:16:04,258 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-13 15:16:04,261 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:04,264 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.2950 sec 2023-07-13 15:16:05,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-13 15:16:05,100 INFO [Listener at localhost/37749] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-13 15:16:05,101 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-13 15:16:05,102 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:05,119 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-13 15:16:05,119 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:05,120 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-13 15:16:05,120 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:05,124 DEBUG [Listener at localhost/37749] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:05,128 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38938, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:05,134 DEBUG [Listener at localhost/37749] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:05,147 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50902, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:05,151 DEBUG [Listener at localhost/37749] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:05,166 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45150, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:05,168 DEBUG [Listener at localhost/37749] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:05,171 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37846, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:05,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:05,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:05,187 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:05,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:05,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:05,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region c3d4810d727b59e7c21e0a7b9d6f54cd to RSGroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:05,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:05,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:05,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:05,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:05,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, REOPEN/MOVE 2023-07-13 15:16:05,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region 3c925ad775000ce1325a3996abbf89e5 to RSGroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,214 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, REOPEN/MOVE 2023-07-13 15:16:05,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:05,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:05,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:05,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:05,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 
15:16:05,217 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=c3d4810d727b59e7c21e0a7b9d6f54cd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:05,217 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261365217"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365217"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365217"}]},"ts":"1689261365217"} 2023-07-13 15:16:05,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, REOPEN/MOVE 2023-07-13 15:16:05,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region 9bac41a54f2c9595fd1e1efdb78b39a8 to RSGroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,219 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, REOPEN/MOVE 2023-07-13 15:16:05,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:05,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:05,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:05,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:05,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:05,228 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=3c925ad775000ce1325a3996abbf89e5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:05,229 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365228"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365228"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365228"}]},"ts":"1689261365228"} 2023-07-13 15:16:05,230 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure c3d4810d727b59e7c21e0a7b9d6f54cd, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:05,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, REOPEN/MOVE 2023-07-13 15:16:05,230 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region 97e606e1ec92bfb6ab11692abe9896c2 to RSGroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,231 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, REOPEN/MOVE 2023-07-13 15:16:05,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:05,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:05,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:05,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:05,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:05,232 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 3c925ad775000ce1325a3996abbf89e5, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:05,233 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=9bac41a54f2c9595fd1e1efdb78b39a8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:05,234 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365233"}]},"ts":"1689261365233"} 2023-07-13 15:16:05,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, REOPEN/MOVE 2023-07-13 15:16:05,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region c142dc7ed03b0397dcb6a04587d3d532 to RSGroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:05,235 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, REOPEN/MOVE 2023-07-13 15:16:05,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:05,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:05,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 
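The entries above show the RSGroupAdminEndpoint accepting the moveTables request and the master scheduling a REOPEN/MOVE TransitRegionStateProcedure for each of the table's five regions. As a point of reference only, a minimal client-side sketch that would produce this sequence is given below; it assumes the RSGroupAdminClient API from the hbase-rsgroup module on branch-2.4 and is not the test's actual code.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Names copied from the log; the test generates the group name per run.
          String group = "Group_testTableMoveTruncateAndDrop_44995158";
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // Issues the "move tables [...] to rsgroup ..." request seen above; the
          // master then closes each region on its current server and reopens it
          // on a server belonging to the destination group.
          rsGroupAdmin.moveTables(Collections.singleton(table), group);
        }
      }
    }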
2023-07-13 15:16:05,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:05,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:05,236 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=97e606e1ec92bfb6ab11692abe9896c2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:05,236 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365236"}]},"ts":"1689261365236"} 2023-07-13 15:16:05,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, REOPEN/MOVE 2023-07-13 15:16:05,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_44995158, current retry=0 2023-07-13 15:16:05,239 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, REOPEN/MOVE 2023-07-13 15:16:05,240 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure 9bac41a54f2c9595fd1e1efdb78b39a8, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:05,241 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=c142dc7ed03b0397dcb6a04587d3d532, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:05,241 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261365241"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365241"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365241"}]},"ts":"1689261365241"} 2023-07-13 15:16:05,241 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 97e606e1ec92bfb6ab11692abe9896c2, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:05,243 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure c142dc7ed03b0397dcb6a04587d3d532, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:05,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9bac41a54f2c9595fd1e1efdb78b39a8, disabling 
compactions & flushes 2023-07-13 15:16:05,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:05,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:05,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c142dc7ed03b0397dcb6a04587d3d532, disabling compactions & flushes 2023-07-13 15:16:05,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. after waiting 0 ms 2023-07-13 15:16:05,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:05,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:05,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:05,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. after waiting 0 ms 2023-07-13 15:16:05,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:05,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:05,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:05,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 
2023-07-13 15:16:05,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c142dc7ed03b0397dcb6a04587d3d532: 2023-07-13 15:16:05,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c142dc7ed03b0397dcb6a04587d3d532 move to jenkins-hbase4.apache.org,34377,1689261361353 record at close sequenceid=2 2023-07-13 15:16:05,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:05,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9bac41a54f2c9595fd1e1efdb78b39a8: 2023-07-13 15:16:05,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9bac41a54f2c9595fd1e1efdb78b39a8 move to jenkins-hbase4.apache.org,32995,1689261357367 record at close sequenceid=2 2023-07-13 15:16:05,449 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,449 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,450 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3c925ad775000ce1325a3996abbf89e5, disabling compactions & flushes 2023-07-13 15:16:05,450 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:05,450 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:05,450 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. after waiting 0 ms 2023-07-13 15:16:05,450 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 
2023-07-13 15:16:05,451 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=c142dc7ed03b0397dcb6a04587d3d532, regionState=CLOSED 2023-07-13 15:16:05,451 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261365451"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261365451"}]},"ts":"1689261365451"} 2023-07-13 15:16:05,451 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c3d4810d727b59e7c21e0a7b9d6f54cd, disabling compactions & flushes 2023-07-13 15:16:05,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. after waiting 0 ms 2023-07-13 15:16:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 
2023-07-13 15:16:05,453 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=9bac41a54f2c9595fd1e1efdb78b39a8, regionState=CLOSED 2023-07-13 15:16:05,453 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365453"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261365453"}]},"ts":"1689261365453"} 2023-07-13 15:16:05,460 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-13 15:16:05,460 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure c142dc7ed03b0397dcb6a04587d3d532, server=jenkins-hbase4.apache.org,44089,1689261357555 in 213 msec 2023-07-13 15:16:05,461 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-13 15:16:05,461 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure 9bac41a54f2c9595fd1e1efdb78b39a8, server=jenkins-hbase4.apache.org,40971,1689261357748 in 218 msec 2023-07-13 15:16:05,461 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34377,1689261361353; forceNewPlan=false, retain=false 2023-07-13 15:16:05,462 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:05,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:05,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:05,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 
2023-07-13 15:16:05,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3c925ad775000ce1325a3996abbf89e5: 2023-07-13 15:16:05,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 3c925ad775000ce1325a3996abbf89e5 move to jenkins-hbase4.apache.org,32995,1689261357367 record at close sequenceid=2 2023-07-13 15:16:05,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:05,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c3d4810d727b59e7c21e0a7b9d6f54cd: 2023-07-13 15:16:05,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c3d4810d727b59e7c21e0a7b9d6f54cd move to jenkins-hbase4.apache.org,32995,1689261357367 record at close sequenceid=2 2023-07-13 15:16:05,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,473 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=3c925ad775000ce1325a3996abbf89e5, regionState=CLOSED 2023-07-13 15:16:05,474 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365473"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261365473"}]},"ts":"1689261365473"} 2023-07-13 15:16:05,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:05,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:05,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 97e606e1ec92bfb6ab11692abe9896c2, disabling compactions & flushes 2023-07-13 15:16:05,475 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:05,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:05,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. after waiting 0 ms 2023-07-13 15:16:05,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 
2023-07-13 15:16:05,475 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=c3d4810d727b59e7c21e0a7b9d6f54cd, regionState=CLOSED 2023-07-13 15:16:05,475 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261365475"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261365475"}]},"ts":"1689261365475"} 2023-07-13 15:16:05,480 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-13 15:16:05,480 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 3c925ad775000ce1325a3996abbf89e5, server=jenkins-hbase4.apache.org,44089,1689261357555 in 244 msec 2023-07-13 15:16:05,481 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:05,482 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-13 15:16:05,482 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure c3d4810d727b59e7c21e0a7b9d6f54cd, server=jenkins-hbase4.apache.org,40971,1689261357748 in 249 msec 2023-07-13 15:16:05,484 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:05,484 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:05,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 
2023-07-13 15:16:05,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 97e606e1ec92bfb6ab11692abe9896c2: 2023-07-13 15:16:05,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 97e606e1ec92bfb6ab11692abe9896c2 move to jenkins-hbase4.apache.org,34377,1689261361353 record at close sequenceid=2 2023-07-13 15:16:05,492 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:05,493 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=97e606e1ec92bfb6ab11692abe9896c2, regionState=CLOSED 2023-07-13 15:16:05,493 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365493"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261365493"}]},"ts":"1689261365493"} 2023-07-13 15:16:05,497 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-13 15:16:05,498 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 97e606e1ec92bfb6ab11692abe9896c2, server=jenkins-hbase4.apache.org,40971,1689261357748 in 254 msec 2023-07-13 15:16:05,498 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34377,1689261361353; forceNewPlan=false, retain=false 2023-07-13 15:16:05,612 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
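At this point all five regions have been closed on their original servers and the balancer has computed new assignments within the destination group; the OpenRegionProcedures that follow redeploy them there. A small sketch of how a client could observe the resulting placement, assuming an already-open Connection named conn (illustrative, not part of the test):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Print the current server for every region of the moved table.
    static void printAssignments(Connection conn) throws IOException {
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      try (RegionLocator locator = conn.getRegionLocator(table)) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          // Once the REOPEN/MOVE procedures complete, each ServerName printed here
          // should be a member of Group_testTableMoveTruncateAndDrop_44995158.
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }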
2023-07-13 15:16:05,612 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=c142dc7ed03b0397dcb6a04587d3d532, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:05,613 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261365612"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365612"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365612"}]},"ts":"1689261365612"} 2023-07-13 15:16:05,613 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=97e606e1ec92bfb6ab11692abe9896c2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:05,613 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365613"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365613"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365613"}]},"ts":"1689261365613"} 2023-07-13 15:16:05,614 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=9bac41a54f2c9595fd1e1efdb78b39a8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:05,614 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=3c925ad775000ce1325a3996abbf89e5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:05,614 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365614"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365614"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365614"}]},"ts":"1689261365614"} 2023-07-13 15:16:05,614 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365614"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365614"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365614"}]},"ts":"1689261365614"} 2023-07-13 15:16:05,614 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=c3d4810d727b59e7c21e0a7b9d6f54cd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:05,614 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261365614"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261365614"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261365614"}]},"ts":"1689261365614"} 2023-07-13 15:16:05,616 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; OpenRegionProcedure 
c142dc7ed03b0397dcb6a04587d3d532, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:05,618 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=30, state=RUNNABLE; OpenRegionProcedure 97e606e1ec92bfb6ab11692abe9896c2, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:05,619 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=28, state=RUNNABLE; OpenRegionProcedure 9bac41a54f2c9595fd1e1efdb78b39a8, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:05,620 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=27, state=RUNNABLE; OpenRegionProcedure 3c925ad775000ce1325a3996abbf89e5, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:05,621 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=26, state=RUNNABLE; OpenRegionProcedure c3d4810d727b59e7c21e0a7b9d6f54cd, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:05,668 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:05,775 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:05,775 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:05,777 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50916, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:05,839 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 
2023-07-13 15:16:05,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 97e606e1ec92bfb6ab11692abe9896c2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 15:16:05,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:05,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:05,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:05,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:05,867 INFO [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:05,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 
2023-07-13 15:16:05,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9bac41a54f2c9595fd1e1efdb78b39a8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 15:16:05,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,891 DEBUG [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/f 2023-07-13 15:16:05,892 DEBUG [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/f 2023-07-13 15:16:05,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:05,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,893 INFO [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 97e606e1ec92bfb6ab11692abe9896c2 columnFamilyName f 2023-07-13 15:16:05,895 INFO [StoreOpener-97e606e1ec92bfb6ab11692abe9896c2-1] regionserver.HStore(310): Store=97e606e1ec92bfb6ab11692abe9896c2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:05,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:05,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 
15:16:05,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:05,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 97e606e1ec92bfb6ab11692abe9896c2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11913825600, jitterRate=0.10956147313117981}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:05,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 97e606e1ec92bfb6ab11692abe9896c2: 2023-07-13 15:16:05,915 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:05,916 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 15:16:05,918 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:05,918 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-13 15:16:05,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2., pid=37, masterSystemTime=1689261365775 2023-07-13 15:16:05,939 INFO [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,939 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:05,939 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-13 15:16:05,939 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:05,939 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-13 15:16:05,941 DEBUG [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/f 2023-07-13 15:16:05,941 DEBUG [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/f 2023-07-13 15:16:05,941 INFO [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9bac41a54f2c9595fd1e1efdb78b39a8 columnFamilyName f 2023-07-13 15:16:05,947 INFO [StoreOpener-9bac41a54f2c9595fd1e1efdb78b39a8-1] regionserver.HStore(310): Store=9bac41a54f2c9595fd1e1efdb78b39a8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:05,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:05,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:05,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 
2023-07-13 15:16:05,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c142dc7ed03b0397dcb6a04587d3d532, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 15:16:05,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:05,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,951 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=97e606e1ec92bfb6ab11692abe9896c2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:05,952 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365951"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261365951"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261365951"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261365951"}]},"ts":"1689261365951"} 2023-07-13 15:16:05,955 INFO [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,957 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=30 2023-07-13 15:16:05,957 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=30, state=SUCCESS; OpenRegionProcedure 97e606e1ec92bfb6ab11692abe9896c2, server=jenkins-hbase4.apache.org,34377,1689261361353 in 337 msec 2023-07-13 15:16:05,957 DEBUG [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/f 2023-07-13 15:16:05,957 DEBUG [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/f 2023-07-13 15:16:05,958 INFO [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c142dc7ed03b0397dcb6a04587d3d532 columnFamilyName f 2023-07-13 15:16:05,959 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, REOPEN/MOVE in 725 msec 2023-07-13 15:16:05,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:05,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9bac41a54f2c9595fd1e1efdb78b39a8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10261908320, jitterRate=-0.04428531229496002}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:05,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9bac41a54f2c9595fd1e1efdb78b39a8: 2023-07-13 15:16:05,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8., pid=38, masterSystemTime=1689261365805 2023-07-13 15:16:05,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:05,965 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:05,965 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 
2023-07-13 15:16:05,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3c925ad775000ce1325a3996abbf89e5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 15:16:05,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:05,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,967 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=9bac41a54f2c9595fd1e1efdb78b39a8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:05,967 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261365966"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261365966"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261365966"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261365966"}]},"ts":"1689261365966"} 2023-07-13 15:16:05,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=28 2023-07-13 15:16:05,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=28, state=SUCCESS; OpenRegionProcedure 9bac41a54f2c9595fd1e1efdb78b39a8, server=jenkins-hbase4.apache.org,32995,1689261357367 in 350 msec 2023-07-13 15:16:05,974 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, REOPEN/MOVE in 745 msec 2023-07-13 15:16:05,975 INFO [StoreOpener-c142dc7ed03b0397dcb6a04587d3d532-1] regionserver.HStore(310): Store=c142dc7ed03b0397dcb6a04587d3d532/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:05,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,981 INFO [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 
3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,983 DEBUG [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/f 2023-07-13 15:16:05,983 DEBUG [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/f 2023-07-13 15:16:05,984 INFO [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3c925ad775000ce1325a3996abbf89e5 columnFamilyName f 2023-07-13 15:16:05,986 INFO [StoreOpener-3c925ad775000ce1325a3996abbf89e5-1] regionserver.HStore(310): Store=3c925ad775000ce1325a3996abbf89e5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:05,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:05,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c142dc7ed03b0397dcb6a04587d3d532; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10014856000, jitterRate=-0.06729385256767273}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:05,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c142dc7ed03b0397dcb6a04587d3d532: 2023-07-13 15:16:05,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532., pid=36, masterSystemTime=1689261365775 2023-07-13 15:16:05,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:05,994 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:05,995 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=c142dc7ed03b0397dcb6a04587d3d532, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:05,995 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261365995"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261365995"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261365995"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261365995"}]},"ts":"1689261365995"} 2023-07-13 15:16:05,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:05,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3c925ad775000ce1325a3996abbf89e5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10020746720, jitterRate=-0.06674523651599884}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:05,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3c925ad775000ce1325a3996abbf89e5: 2023-07-13 15:16:06,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5., pid=39, masterSystemTime=1689261365805 2023-07-13 15:16:06,002 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-13 15:16:06,002 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; OpenRegionProcedure c142dc7ed03b0397dcb6a04587d3d532, server=jenkins-hbase4.apache.org,34377,1689261361353 in 382 msec 2023-07-13 15:16:06,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:06,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:06,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 
2023-07-13 15:16:06,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c3d4810d727b59e7c21e0a7b9d6f54cd, NAME => 'Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 15:16:06,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:06,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,006 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=3c925ad775000ce1325a3996abbf89e5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:06,006 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, REOPEN/MOVE in 767 msec 2023-07-13 15:16:06,006 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366006"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261366006"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261366006"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261366006"}]},"ts":"1689261366006"} 2023-07-13 15:16:06,007 INFO [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,011 DEBUG [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/f 2023-07-13 15:16:06,011 DEBUG [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/f 2023-07-13 15:16:06,011 INFO [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c3d4810d727b59e7c21e0a7b9d6f54cd columnFamilyName f 2023-07-13 15:16:06,012 INFO [StoreOpener-c3d4810d727b59e7c21e0a7b9d6f54cd-1] regionserver.HStore(310): Store=c3d4810d727b59e7c21e0a7b9d6f54cd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:06,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,021 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=27 2023-07-13 15:16:06,021 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=27, state=SUCCESS; OpenRegionProcedure 3c925ad775000ce1325a3996abbf89e5, server=jenkins-hbase4.apache.org,32995,1689261357367 in 396 msec 2023-07-13 15:16:06,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,023 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c3d4810d727b59e7c21e0a7b9d6f54cd; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12050936960, jitterRate=0.12233096361160278}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:06,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c3d4810d727b59e7c21e0a7b9d6f54cd: 2023-07-13 15:16:06,024 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd., pid=40, masterSystemTime=1689261365805 2023-07-13 15:16:06,024 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, REOPEN/MOVE in 805 msec 2023-07-13 15:16:06,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:06,026 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 
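The REOPEN/MOVE TransitRegionStateProcedures finishing in the entries above are the server-side effect of moving the test table into its dedicated RegionServer group: each region closes on its previous server and reopens on a server owned by the target group. A minimal client-side sketch of that kind of move is shown below, assuming the branch-2 hbase-rsgroup admin client API; the class name and connection setup are illustrative assumptions, while the table and group names are copied from the log only for illustration.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Illustrative client; the test harness wires its own connection and group names.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Moving the table is what drives the per-region REOPEN/MOVE procedures
      // recorded in the log above.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
          "Group_testTableMoveTruncateAndDrop_44995158");
    }
  }
}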
2023-07-13 15:16:06,027 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=c3d4810d727b59e7c21e0a7b9d6f54cd, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:06,027 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366027"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261366027"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261366027"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261366027"}]},"ts":"1689261366027"} 2023-07-13 15:16:06,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=26 2023-07-13 15:16:06,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=26, state=SUCCESS; OpenRegionProcedure c3d4810d727b59e7c21e0a7b9d6f54cd, server=jenkins-hbase4.apache.org,32995,1689261357367 in 409 msec 2023-07-13 15:16:06,035 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, REOPEN/MOVE in 821 msec 2023-07-13 15:16:06,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-13 15:16:06,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_44995158. 2023-07-13 15:16:06,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:06,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:06,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:06,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:06,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:06,249 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:06,257 INFO [Listener at localhost/37749] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:06,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:06,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=41, 
state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:06,277 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261366277"}]},"ts":"1689261366277"} 2023-07-13 15:16:06,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-13 15:16:06,279 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-13 15:16:06,281 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-13 15:16:06,283 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, UNASSIGN}] 2023-07-13 15:16:06,285 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, UNASSIGN 2023-07-13 15:16:06,285 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, UNASSIGN 2023-07-13 15:16:06,285 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, UNASSIGN 2023-07-13 15:16:06,285 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, UNASSIGN 2023-07-13 15:16:06,286 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, UNASSIGN 2023-07-13 15:16:06,287 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=97e606e1ec92bfb6ab11692abe9896c2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:06,287 INFO [PEWorker-3] assignment.RegionStateStore(219): 
pid=46 updating hbase:meta row=c142dc7ed03b0397dcb6a04587d3d532, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:06,287 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366287"}]},"ts":"1689261366287"} 2023-07-13 15:16:06,287 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=9bac41a54f2c9595fd1e1efdb78b39a8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:06,287 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366287"}]},"ts":"1689261366287"} 2023-07-13 15:16:06,287 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366287"}]},"ts":"1689261366287"} 2023-07-13 15:16:06,288 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=3c925ad775000ce1325a3996abbf89e5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:06,288 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=c3d4810d727b59e7c21e0a7b9d6f54cd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:06,288 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366288"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366288"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366288"}]},"ts":"1689261366288"} 2023-07-13 15:16:06,288 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366288"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366288"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366288"}]},"ts":"1689261366288"} 2023-07-13 15:16:06,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=45, state=RUNNABLE; CloseRegionProcedure 97e606e1ec92bfb6ab11692abe9896c2, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:06,290 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=44, state=RUNNABLE; CloseRegionProcedure 9bac41a54f2c9595fd1e1efdb78b39a8, 
server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:06,291 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=46, state=RUNNABLE; CloseRegionProcedure c142dc7ed03b0397dcb6a04587d3d532, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:06,292 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=43, state=RUNNABLE; CloseRegionProcedure 3c925ad775000ce1325a3996abbf89e5, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:06,294 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=42, state=RUNNABLE; CloseRegionProcedure c3d4810d727b59e7c21e0a7b9d6f54cd, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:06,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-13 15:16:06,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,448 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c3d4810d727b59e7c21e0a7b9d6f54cd, disabling compactions & flushes 2023-07-13 15:16:06,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:06,448 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:06,448 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. after waiting 0 ms 2023-07-13 15:16:06,448 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:06,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:06,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c142dc7ed03b0397dcb6a04587d3d532, disabling compactions & flushes 2023-07-13 15:16:06,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:06,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 2023-07-13 15:16:06,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. after waiting 0 ms 2023-07-13 15:16:06,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 
2023-07-13 15:16:06,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:06,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd. 2023-07-13 15:16:06,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c3d4810d727b59e7c21e0a7b9d6f54cd: 2023-07-13 15:16:06,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:06,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9bac41a54f2c9595fd1e1efdb78b39a8, disabling compactions & flushes 2023-07-13 15:16:06,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:06,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:06,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. after waiting 0 ms 2023-07-13 15:16:06,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:06,477 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=c3d4810d727b59e7c21e0a7b9d6f54cd, regionState=CLOSED 2023-07-13 15:16:06,478 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366477"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366477"}]},"ts":"1689261366477"} 2023-07-13 15:16:06,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:06,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532. 
2023-07-13 15:16:06,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c142dc7ed03b0397dcb6a04587d3d532: 2023-07-13 15:16:06,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:06,493 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=42 2023-07-13 15:16:06,493 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=42, state=SUCCESS; CloseRegionProcedure c3d4810d727b59e7c21e0a7b9d6f54cd, server=jenkins-hbase4.apache.org,32995,1689261357367 in 194 msec 2023-07-13 15:16:06,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:06,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:06,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8. 2023-07-13 15:16:06,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 97e606e1ec92bfb6ab11692abe9896c2, disabling compactions & flushes 2023-07-13 15:16:06,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:06,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:06,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. after waiting 0 ms 2023-07-13 15:16:06,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 
2023-07-13 15:16:06,495 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=c142dc7ed03b0397dcb6a04587d3d532, regionState=CLOSED 2023-07-13 15:16:06,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9bac41a54f2c9595fd1e1efdb78b39a8: 2023-07-13 15:16:06,495 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366495"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366495"}]},"ts":"1689261366495"} 2023-07-13 15:16:06,497 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c3d4810d727b59e7c21e0a7b9d6f54cd, UNASSIGN in 211 msec 2023-07-13 15:16:06,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:06,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:06,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3c925ad775000ce1325a3996abbf89e5, disabling compactions & flushes 2023-07-13 15:16:06,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:06,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 2023-07-13 15:16:06,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. after waiting 0 ms 2023-07-13 15:16:06,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 
2023-07-13 15:16:06,506 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=9bac41a54f2c9595fd1e1efdb78b39a8, regionState=CLOSED 2023-07-13 15:16:06,506 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366506"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366506"}]},"ts":"1689261366506"} 2023-07-13 15:16:06,507 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=46 2023-07-13 15:16:06,507 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=46, state=SUCCESS; CloseRegionProcedure c142dc7ed03b0397dcb6a04587d3d532, server=jenkins-hbase4.apache.org,34377,1689261361353 in 207 msec 2023-07-13 15:16:06,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:06,511 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2. 2023-07-13 15:16:06,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 97e606e1ec92bfb6ab11692abe9896c2: 2023-07-13 15:16:06,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c142dc7ed03b0397dcb6a04587d3d532, UNASSIGN in 225 msec 2023-07-13 15:16:06,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:06,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:06,515 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=97e606e1ec92bfb6ab11692abe9896c2, regionState=CLOSED 2023-07-13 15:16:06,515 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5. 
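The DisableTableProcedure (pid=41) and its UNASSIGN/CloseRegionProcedure children recorded around here correspond to a single admin-side disable call. A minimal sketch follows; the class name and connection setup are assumptions, not taken from this log.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Blocks until the DisableTableProcedure completes, i.e. every region has been
      // unassigned (closed) and the table state is DISABLED in hbase:meta.
      admin.disableTable(table);
    }
  }
}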
2023-07-13 15:16:06,515 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366515"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366515"}]},"ts":"1689261366515"} 2023-07-13 15:16:06,515 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=44 2023-07-13 15:16:06,515 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3c925ad775000ce1325a3996abbf89e5: 2023-07-13 15:16:06,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; CloseRegionProcedure 9bac41a54f2c9595fd1e1efdb78b39a8, server=jenkins-hbase4.apache.org,32995,1689261357367 in 220 msec 2023-07-13 15:16:06,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:06,519 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9bac41a54f2c9595fd1e1efdb78b39a8, UNASSIGN in 233 msec 2023-07-13 15:16:06,521 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=3c925ad775000ce1325a3996abbf89e5, regionState=CLOSED 2023-07-13 15:16:06,521 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366521"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366521"}]},"ts":"1689261366521"} 2023-07-13 15:16:06,524 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=45 2023-07-13 15:16:06,525 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=45, state=SUCCESS; CloseRegionProcedure 97e606e1ec92bfb6ab11692abe9896c2, server=jenkins-hbase4.apache.org,34377,1689261361353 in 229 msec 2023-07-13 15:16:06,527 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=43 2023-07-13 15:16:06,528 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=97e606e1ec92bfb6ab11692abe9896c2, UNASSIGN in 243 msec 2023-07-13 15:16:06,528 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=43, state=SUCCESS; CloseRegionProcedure 3c925ad775000ce1325a3996abbf89e5, server=jenkins-hbase4.apache.org,32995,1689261357367 in 232 msec 2023-07-13 15:16:06,530 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=41 2023-07-13 15:16:06,531 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c925ad775000ce1325a3996abbf89e5, UNASSIGN in 246 msec 2023-07-13 15:16:06,532 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261366531"}]},"ts":"1689261366531"} 2023-07-13 15:16:06,533 INFO 
[PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-13 15:16:06,536 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-13 15:16:06,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 273 msec 2023-07-13 15:16:06,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-13 15:16:06,582 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-13 15:16:06,583 INFO [Listener at localhost/37749] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:06,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:06,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-13 15:16:06,598 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-13 15:16:06,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 15:16:06,613 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:06,613 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:06,613 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:06,613 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,613 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:06,621 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/recovered.edits] 2023-07-13 15:16:06,621 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving 
[FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/recovered.edits] 2023-07-13 15:16:06,622 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/recovered.edits] 2023-07-13 15:16:06,622 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/recovered.edits] 2023-07-13 15:16:06,622 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/recovered.edits] 2023-07-13 15:16:06,639 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/recovered.edits/7.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5/recovered.edits/7.seqid 2023-07-13 15:16:06,640 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/recovered.edits/7.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532/recovered.edits/7.seqid 2023-07-13 15:16:06,641 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/recovered.edits/7.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8/recovered.edits/7.seqid 2023-07-13 15:16:06,641 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/recovered.edits/7.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2/recovered.edits/7.seqid 2023-07-13 15:16:06,642 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c925ad775000ce1325a3996abbf89e5 2023-07-13 15:16:06,644 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/97e606e1ec92bfb6ab11692abe9896c2 2023-07-13 15:16:06,644 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c142dc7ed03b0397dcb6a04587d3d532 2023-07-13 15:16:06,644 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9bac41a54f2c9595fd1e1efdb78b39a8 2023-07-13 15:16:06,646 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/recovered.edits/7.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd/recovered.edits/7.seqid 2023-07-13 15:16:06,647 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c3d4810d727b59e7c21e0a7b9d6f54cd 2023-07-13 15:16:06,647 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 15:16:06,681 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-13 15:16:06,692 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-13 15:16:06,693 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-13 15:16:06,693 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261366693"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:06,694 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261366693"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:06,694 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261366693"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:06,694 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261366693"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:06,694 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261366693"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:06,697 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 15:16:06,697 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c3d4810d727b59e7c21e0a7b9d6f54cd, NAME => 'Group_testTableMoveTruncateAndDrop,,1689261362964.c3d4810d727b59e7c21e0a7b9d6f54cd.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 3c925ad775000ce1325a3996abbf89e5, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689261362964.3c925ad775000ce1325a3996abbf89e5.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 9bac41a54f2c9595fd1e1efdb78b39a8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261362964.9bac41a54f2c9595fd1e1efdb78b39a8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 97e606e1ec92bfb6ab11692abe9896c2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261362964.97e606e1ec92bfb6ab11692abe9896c2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => c142dc7ed03b0397dcb6a04587d3d532, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689261362964.c142dc7ed03b0397dcb6a04587d3d532.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 15:16:06,697 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-13 15:16:06,697 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261366697"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:06,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 15:16:06,703 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-13 15:16:06,713 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:06,713 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:06,713 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:06,713 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:06,713 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:06,714 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431 empty. 2023-07-13 15:16:06,714 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819 empty. 2023-07-13 15:16:06,714 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048 empty. 2023-07-13 15:16:06,714 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364 empty. 
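The HFileArchiver moves and hbase:meta deletions above are internal steps of the TruncateTableProcedure stored as pid=52. On the client side this is a single call; a hedged sketch under the same assumptions as the earlier snippets (class name and connection setup are illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncatePreservingSplitsSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // The table must already be disabled (the DisableTableProcedure above).
      // preserveSplits=true keeps the original region boundaries, which is why the
      // procedure recreates five regions with the same start/end keys but new
      // encoded region names.
      admin.truncateTable(table, true);
    }
  }
}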
2023-07-13 15:16:06,714 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:06,714 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:06,715 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c empty. 2023-07-13 15:16:06,715 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:06,715 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:06,715 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:06,715 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 15:16:06,743 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:06,745 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e85e437d7379d36a5e4d751cbc62d431, NAME => 'Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:06,745 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7179648048e4a95f6677db29f497c13c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:06,745 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => bcc05e0aadf7330e902722eb3326b048, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:06,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:06,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing e85e437d7379d36a5e4d751cbc62d431, disabling compactions & flushes 2023-07-13 15:16:06,792 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:06,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:06,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. after waiting 0 ms 2023-07-13 15:16:06,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:06,792 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 
2023-07-13 15:16:06,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e85e437d7379d36a5e4d751cbc62d431: 2023-07-13 15:16:06,793 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3a9956140971f7a536cf75dc6d7c1364, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:06,804 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:06,804 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 7179648048e4a95f6677db29f497c13c, disabling compactions & flushes 2023-07-13 15:16:06,804 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:06,804 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:06,804 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. after waiting 0 ms 2023-07-13 15:16:06,804 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:06,804 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 
2023-07-13 15:16:06,804 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 7179648048e4a95f6677db29f497c13c: 2023-07-13 15:16:06,805 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2d128c18fab22ca1cb9b9c04567ab819, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:06,807 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:06,807 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing bcc05e0aadf7330e902722eb3326b048, disabling compactions & flushes 2023-07-13 15:16:06,807 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 2023-07-13 15:16:06,807 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 2023-07-13 15:16:06,807 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. after waiting 0 ms 2023-07-13 15:16:06,807 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 2023-07-13 15:16:06,807 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 
2023-07-13 15:16:06,807 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for bcc05e0aadf7330e902722eb3326b048: 2023-07-13 15:16:06,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:06,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 3a9956140971f7a536cf75dc6d7c1364, disabling compactions & flushes 2023-07-13 15:16:06,821 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:06,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:06,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. after waiting 0 ms 2023-07-13 15:16:06,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:06,821 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:06,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 3a9956140971f7a536cf75dc6d7c1364: 2023-07-13 15:16:06,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:06,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2d128c18fab22ca1cb9b9c04567ab819, disabling compactions & flushes 2023-07-13 15:16:06,822 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 2023-07-13 15:16:06,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 2023-07-13 15:16:06,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 
after waiting 0 ms 2023-07-13 15:16:06,823 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 2023-07-13 15:16:06,823 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 2023-07-13 15:16:06,823 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2d128c18fab22ca1cb9b9c04567ab819: 2023-07-13 15:16:06,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366826"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366826"}]},"ts":"1689261366826"} 2023-07-13 15:16:06,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366826"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366826"}]},"ts":"1689261366826"} 2023-07-13 15:16:06,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366826"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366826"}]},"ts":"1689261366826"} 2023-07-13 15:16:06,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366826"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366826"}]},"ts":"1689261366826"} 2023-07-13 15:16:06,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366826"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261366826"}]},"ts":"1689261366826"} 2023-07-13 15:16:06,830 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-13 15:16:06,831 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261366831"}]},"ts":"1689261366831"} 2023-07-13 15:16:06,832 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-13 15:16:06,837 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:06,838 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:06,838 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:06,838 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:06,840 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e85e437d7379d36a5e4d751cbc62d431, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7179648048e4a95f6677db29f497c13c, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bcc05e0aadf7330e902722eb3326b048, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a9956140971f7a536cf75dc6d7c1364, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2d128c18fab22ca1cb9b9c04567ab819, ASSIGN}] 2023-07-13 15:16:06,842 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e85e437d7379d36a5e4d751cbc62d431, ASSIGN 2023-07-13 15:16:06,842 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7179648048e4a95f6677db29f497c13c, ASSIGN 2023-07-13 15:16:06,842 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bcc05e0aadf7330e902722eb3326b048, ASSIGN 2023-07-13 15:16:06,842 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a9956140971f7a536cf75dc6d7c1364, ASSIGN 2023-07-13 15:16:06,842 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2d128c18fab22ca1cb9b9c04567ab819, ASSIGN 2023-07-13 15:16:06,843 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e85e437d7379d36a5e4d751cbc62d431, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34377,1689261361353; forceNewPlan=false, retain=false 2023-07-13 15:16:06,843 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bcc05e0aadf7330e902722eb3326b048, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34377,1689261361353; forceNewPlan=false, retain=false 2023-07-13 15:16:06,843 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a9956140971f7a536cf75dc6d7c1364, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:06,843 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7179648048e4a95f6677db29f497c13c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:06,844 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2d128c18fab22ca1cb9b9c04567ab819, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34377,1689261361353; forceNewPlan=false, retain=false 2023-07-13 15:16:06,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 15:16:06,993 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-13 15:16:06,996 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=2d128c18fab22ca1cb9b9c04567ab819, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:06,996 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=bcc05e0aadf7330e902722eb3326b048, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:06,997 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366996"}]},"ts":"1689261366996"} 2023-07-13 15:16:06,996 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=e85e437d7379d36a5e4d751cbc62d431, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:06,997 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366996"}]},"ts":"1689261366996"} 2023-07-13 15:16:06,997 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261366996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366996"}]},"ts":"1689261366996"} 2023-07-13 15:16:06,996 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=3a9956140971f7a536cf75dc6d7c1364, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:06,996 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=7179648048e4a95f6677db29f497c13c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:06,997 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366996"}]},"ts":"1689261366996"} 2023-07-13 15:16:06,997 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261366996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261366996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261366996"}]},"ts":"1689261366996"} 2023-07-13 15:16:06,999 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=57, state=RUNNABLE; OpenRegionProcedure 
2d128c18fab22ca1cb9b9c04567ab819, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:07,001 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=53, state=RUNNABLE; OpenRegionProcedure e85e437d7379d36a5e4d751cbc62d431, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:07,002 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=55, state=RUNNABLE; OpenRegionProcedure bcc05e0aadf7330e902722eb3326b048, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:07,003 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=56, state=RUNNABLE; OpenRegionProcedure 3a9956140971f7a536cf75dc6d7c1364, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:07,011 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=54, state=RUNNABLE; OpenRegionProcedure 7179648048e4a95f6677db29f497c13c, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:07,157 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:07,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e85e437d7379d36a5e4d751cbc62d431, NAME => 'Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 15:16:07,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:07,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,159 INFO [StoreOpener-e85e437d7379d36a5e4d751cbc62d431-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,161 DEBUG [StoreOpener-e85e437d7379d36a5e4d751cbc62d431-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431/f 2023-07-13 15:16:07,161 DEBUG [StoreOpener-e85e437d7379d36a5e4d751cbc62d431-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431/f 2023-07-13 15:16:07,161 INFO [StoreOpener-e85e437d7379d36a5e4d751cbc62d431-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e85e437d7379d36a5e4d751cbc62d431 columnFamilyName f 2023-07-13 15:16:07,162 INFO [StoreOpener-e85e437d7379d36a5e4d751cbc62d431-1] regionserver.HStore(310): Store=e85e437d7379d36a5e4d751cbc62d431/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:07,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,167 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 
2023-07-13 15:16:07,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3a9956140971f7a536cf75dc6d7c1364, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 15:16:07,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:07,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,170 INFO [StoreOpener-3a9956140971f7a536cf75dc6d7c1364-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:07,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e85e437d7379d36a5e4d751cbc62d431; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9717064800, jitterRate=-0.09502781927585602}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:07,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e85e437d7379d36a5e4d751cbc62d431: 2023-07-13 15:16:07,173 DEBUG [StoreOpener-3a9956140971f7a536cf75dc6d7c1364-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364/f 2023-07-13 15:16:07,173 DEBUG [StoreOpener-3a9956140971f7a536cf75dc6d7c1364-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364/f 2023-07-13 15:16:07,174 INFO [StoreOpener-3a9956140971f7a536cf75dc6d7c1364-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; 
throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3a9956140971f7a536cf75dc6d7c1364 columnFamilyName f 2023-07-13 15:16:07,174 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431., pid=59, masterSystemTime=1689261367152 2023-07-13 15:16:07,175 INFO [StoreOpener-3a9956140971f7a536cf75dc6d7c1364-1] regionserver.HStore(310): Store=3a9956140971f7a536cf75dc6d7c1364/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:07,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:07,179 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:07,179 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 
2023-07-13 15:16:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2d128c18fab22ca1cb9b9c04567ab819, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 15:16:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,181 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=e85e437d7379d36a5e4d751cbc62d431, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:07,182 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261367181"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261367181"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261367181"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261367181"}]},"ts":"1689261367181"} 2023-07-13 15:16:07,183 INFO [StoreOpener-2d128c18fab22ca1cb9b9c04567ab819-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:07,185 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3a9956140971f7a536cf75dc6d7c1364; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12014556480, jitterRate=0.11894276738166809}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:07,185 DEBUG [StoreOpener-2d128c18fab22ca1cb9b9c04567ab819-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819/f 2023-07-13 15:16:07,185 DEBUG 
[StoreOpener-2d128c18fab22ca1cb9b9c04567ab819-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819/f 2023-07-13 15:16:07,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3a9956140971f7a536cf75dc6d7c1364: 2023-07-13 15:16:07,186 INFO [StoreOpener-2d128c18fab22ca1cb9b9c04567ab819-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d128c18fab22ca1cb9b9c04567ab819 columnFamilyName f 2023-07-13 15:16:07,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364., pid=61, masterSystemTime=1689261367163 2023-07-13 15:16:07,187 INFO [StoreOpener-2d128c18fab22ca1cb9b9c04567ab819-1] regionserver.HStore(310): Store=2d128c18fab22ca1cb9b9c04567ab819/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:07,187 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=53 2023-07-13 15:16:07,187 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=53, state=SUCCESS; OpenRegionProcedure e85e437d7379d36a5e4d751cbc62d431, server=jenkins-hbase4.apache.org,34377,1689261361353 in 184 msec 2023-07-13 15:16:07,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:07,189 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e85e437d7379d36a5e4d751cbc62d431, ASSIGN in 349 msec 2023-07-13 15:16:07,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 
2023-07-13 15:16:07,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:07,189 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=3a9956140971f7a536cf75dc6d7c1364, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:07,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7179648048e4a95f6677db29f497c13c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 15:16:07,189 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367189"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261367189"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261367189"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261367189"}]},"ts":"1689261367189"} 2023-07-13 15:16:07,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:07,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,192 INFO [StoreOpener-7179648048e4a95f6677db29f497c13c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,194 DEBUG [StoreOpener-7179648048e4a95f6677db29f497c13c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c/f 2023-07-13 15:16:07,194 DEBUG [StoreOpener-7179648048e4a95f6677db29f497c13c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c/f 2023-07-13 15:16:07,194 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=56 2023-07-13 15:16:07,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,194 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished 
pid=61, ppid=56, state=SUCCESS; OpenRegionProcedure 3a9956140971f7a536cf75dc6d7c1364, server=jenkins-hbase4.apache.org,32995,1689261357367 in 189 msec 2023-07-13 15:16:07,194 INFO [StoreOpener-7179648048e4a95f6677db29f497c13c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7179648048e4a95f6677db29f497c13c columnFamilyName f 2023-07-13 15:16:07,195 INFO [StoreOpener-7179648048e4a95f6677db29f497c13c-1] regionserver.HStore(310): Store=7179648048e4a95f6677db29f497c13c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:07,196 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a9956140971f7a536cf75dc6d7c1364, ASSIGN in 354 msec 2023-07-13 15:16:07,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:07,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2d128c18fab22ca1cb9b9c04567ab819; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9554422400, jitterRate=-0.11017507314682007}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:07,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2d128c18fab22ca1cb9b9c04567ab819: 2023-07-13 15:16:07,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819., pid=58, masterSystemTime=1689261367152 2023-07-13 15:16:07,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 
2023-07-13 15:16:07,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 2023-07-13 15:16:07,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 2023-07-13 15:16:07,204 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=2d128c18fab22ca1cb9b9c04567ab819, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:07,205 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261367204"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261367204"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261367204"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261367204"}]},"ts":"1689261367204"} 2023-07-13 15:16:07,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 15:16:07,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bcc05e0aadf7330e902722eb3326b048, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 15:16:07,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:07,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,210 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=57 2023-07-13 15:16:07,210 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=57, state=SUCCESS; OpenRegionProcedure 2d128c18fab22ca1cb9b9c04567ab819, server=jenkins-hbase4.apache.org,34377,1689261361353 in 208 msec 2023-07-13 15:16:07,211 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2d128c18fab22ca1cb9b9c04567ab819, ASSIGN in 370 msec 2023-07-13 15:16:07,212 INFO [StoreOpener-bcc05e0aadf7330e902722eb3326b048-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:07,214 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7179648048e4a95f6677db29f497c13c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10885752000, jitterRate=0.013814657926559448}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:07,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7179648048e4a95f6677db29f497c13c: 2023-07-13 15:16:07,214 DEBUG [StoreOpener-bcc05e0aadf7330e902722eb3326b048-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048/f 2023-07-13 15:16:07,214 DEBUG [StoreOpener-bcc05e0aadf7330e902722eb3326b048-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048/f 2023-07-13 15:16:07,215 INFO [StoreOpener-bcc05e0aadf7330e902722eb3326b048-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bcc05e0aadf7330e902722eb3326b048 columnFamilyName f 2023-07-13 15:16:07,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c., pid=62, masterSystemTime=1689261367163 2023-07-13 15:16:07,215 INFO [StoreOpener-bcc05e0aadf7330e902722eb3326b048-1] regionserver.HStore(310): Store=bcc05e0aadf7330e902722eb3326b048/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:07,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:07,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:07,220 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=7179648048e4a95f6677db29f497c13c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:07,220 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367220"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261367220"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261367220"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261367220"}]},"ts":"1689261367220"} 2023-07-13 15:16:07,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,227 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=54 2023-07-13 15:16:07,228 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=54, state=SUCCESS; OpenRegionProcedure 7179648048e4a95f6677db29f497c13c, server=jenkins-hbase4.apache.org,32995,1689261357367 in 211 msec 2023-07-13 15:16:07,230 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7179648048e4a95f6677db29f497c13c, ASSIGN in 388 msec 2023-07-13 15:16:07,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:07,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bcc05e0aadf7330e902722eb3326b048; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9874075360, jitterRate=-0.08040507137775421}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:07,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bcc05e0aadf7330e902722eb3326b048: 2023-07-13 15:16:07,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048., pid=60, masterSystemTime=1689261367152 2023-07-13 15:16:07,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 2023-07-13 15:16:07,240 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 2023-07-13 15:16:07,240 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=bcc05e0aadf7330e902722eb3326b048, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:07,241 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367240"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261367240"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261367240"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261367240"}]},"ts":"1689261367240"} 2023-07-13 15:16:07,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-13 15:16:07,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; OpenRegionProcedure bcc05e0aadf7330e902722eb3326b048, server=jenkins-hbase4.apache.org,34377,1689261361353 in 241 msec 2023-07-13 15:16:07,247 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=52 2023-07-13 15:16:07,247 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bcc05e0aadf7330e902722eb3326b048, ASSIGN in 405 msec 2023-07-13 15:16:07,248 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261367247"}]},"ts":"1689261367247"} 2023-07-13 15:16:07,250 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-13 15:16:07,252 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-13 15:16:07,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 664 msec 2023-07-13 15:16:07,656 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-13 15:16:07,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-13 15:16:07,707 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-13 15:16:07,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:07,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:07,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:07,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:07,711 INFO [Listener at localhost/37749] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:07,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:07,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:07,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-13 15:16:07,716 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261367715"}]},"ts":"1689261367715"} 2023-07-13 15:16:07,717 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-13 15:16:07,719 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-13 15:16:07,720 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e85e437d7379d36a5e4d751cbc62d431, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7179648048e4a95f6677db29f497c13c, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bcc05e0aadf7330e902722eb3326b048, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a9956140971f7a536cf75dc6d7c1364, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2d128c18fab22ca1cb9b9c04567ab819, UNASSIGN}] 2023-07-13 15:16:07,722 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e85e437d7379d36a5e4d751cbc62d431, UNASSIGN 2023-07-13 15:16:07,722 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2d128c18fab22ca1cb9b9c04567ab819, UNASSIGN 2023-07-13 15:16:07,722 INFO [PEWorker-3] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7179648048e4a95f6677db29f497c13c, UNASSIGN 2023-07-13 15:16:07,723 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bcc05e0aadf7330e902722eb3326b048, UNASSIGN 2023-07-13 15:16:07,723 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a9956140971f7a536cf75dc6d7c1364, UNASSIGN 2023-07-13 15:16:07,723 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=e85e437d7379d36a5e4d751cbc62d431, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:07,723 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=2d128c18fab22ca1cb9b9c04567ab819, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:07,724 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261367723"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261367723"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261367723"}]},"ts":"1689261367723"} 2023-07-13 15:16:07,724 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261367723"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261367723"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261367723"}]},"ts":"1689261367723"} 2023-07-13 15:16:07,724 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=bcc05e0aadf7330e902722eb3326b048, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:07,724 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=7179648048e4a95f6677db29f497c13c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:07,724 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367724"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261367724"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261367724"}]},"ts":"1689261367724"} 2023-07-13 15:16:07,724 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367724"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261367724"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261367724"}]},"ts":"1689261367724"} 2023-07-13 15:16:07,725 INFO [PEWorker-2] 
assignment.RegionStateStore(219): pid=67 updating hbase:meta row=3a9956140971f7a536cf75dc6d7c1364, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:07,725 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367724"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261367724"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261367724"}]},"ts":"1689261367724"} 2023-07-13 15:16:07,726 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=64, state=RUNNABLE; CloseRegionProcedure e85e437d7379d36a5e4d751cbc62d431, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:07,727 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=68, state=RUNNABLE; CloseRegionProcedure 2d128c18fab22ca1cb9b9c04567ab819, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:07,728 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=66, state=RUNNABLE; CloseRegionProcedure bcc05e0aadf7330e902722eb3326b048, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:07,729 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=65, state=RUNNABLE; CloseRegionProcedure 7179648048e4a95f6677db29f497c13c, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:07,730 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=67, state=RUNNABLE; CloseRegionProcedure 3a9956140971f7a536cf75dc6d7c1364, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:07,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-13 15:16:07,879 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2d128c18fab22ca1cb9b9c04567ab819, disabling compactions & flushes 2023-07-13 15:16:07,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 2023-07-13 15:16:07,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 2023-07-13 15:16:07,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. after waiting 0 ms 2023-07-13 15:16:07,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 
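[Editor's note] The records above cover the tail of the TruncateTableProcedure (pid=52, preserveSplits=true) and the start of the DisableTableProcedure (pid=63), which fans out into one UNASSIGN per region. Both are driven from the client through the ordinary Admin API; the following is a minimal sketch of the calls that produce this kind of sequence against the mini-cluster configuration, not the test's actual helper code (the table name is taken from the log).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateThenDisableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // Truncate requires a disabled table; preserveSplits=true is what makes
          // the TruncateTableProcedure re-create and re-assign the same five
          // regions (pid=52 above) and leave the table ENABLED again afterwards.
          admin.disableTable(tn);
          admin.truncateTable(tn, true);
          // Second disable (pid=63 above), so the table can be dropped next.
          admin.disableTable(tn);
        }
      }
    }
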
2023-07-13 15:16:07,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3a9956140971f7a536cf75dc6d7c1364, disabling compactions & flushes 2023-07-13 15:16:07,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:07,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:07,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. after waiting 0 ms 2023-07-13 15:16:07,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:07,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:07,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819. 2023-07-13 15:16:07,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2d128c18fab22ca1cb9b9c04567ab819: 2023-07-13 15:16:07,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:07,890 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:07,890 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e85e437d7379d36a5e4d751cbc62d431, disabling compactions & flushes 2023-07-13 15:16:07,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:07,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:07,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 
after waiting 0 ms 2023-07-13 15:16:07,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:07,891 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=2d128c18fab22ca1cb9b9c04567ab819, regionState=CLOSED 2023-07-13 15:16:07,891 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261367891"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261367891"}]},"ts":"1689261367891"} 2023-07-13 15:16:07,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364. 2023-07-13 15:16:07,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3a9956140971f7a536cf75dc6d7c1364: 2023-07-13 15:16:07,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:07,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,898 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=68 2023-07-13 15:16:07,898 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=68, state=SUCCESS; CloseRegionProcedure 2d128c18fab22ca1cb9b9c04567ab819, server=jenkins-hbase4.apache.org,34377,1689261361353 in 167 msec 2023-07-13 15:16:07,898 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=3a9956140971f7a536cf75dc6d7c1364, regionState=CLOSED 2023-07-13 15:16:07,899 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367898"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261367898"}]},"ts":"1689261367898"} 2023-07-13 15:16:07,900 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2d128c18fab22ca1cb9b9c04567ab819, UNASSIGN in 178 msec 2023-07-13 15:16:07,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7179648048e4a95f6677db29f497c13c, disabling compactions & flushes 2023-07-13 15:16:07,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:07,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:07,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 
after waiting 0 ms 2023-07-13 15:16:07,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:07,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:07,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431. 2023-07-13 15:16:07,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e85e437d7379d36a5e4d751cbc62d431: 2023-07-13 15:16:07,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=67 2023-07-13 15:16:07,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=67, state=SUCCESS; CloseRegionProcedure 3a9956140971f7a536cf75dc6d7c1364, server=jenkins-hbase4.apache.org,32995,1689261357367 in 173 msec 2023-07-13 15:16:07,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:07,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bcc05e0aadf7330e902722eb3326b048, disabling compactions & flushes 2023-07-13 15:16:07,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 2023-07-13 15:16:07,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 2023-07-13 15:16:07,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. after waiting 0 ms 2023-07-13 15:16:07,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 
2023-07-13 15:16:07,909 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=e85e437d7379d36a5e4d751cbc62d431, regionState=CLOSED 2023-07-13 15:16:07,910 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689261367909"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261367909"}]},"ts":"1689261367909"} 2023-07-13 15:16:07,909 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a9956140971f7a536cf75dc6d7c1364, UNASSIGN in 186 msec 2023-07-13 15:16:07,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:07,913 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c. 2023-07-13 15:16:07,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7179648048e4a95f6677db29f497c13c: 2023-07-13 15:16:07,915 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=64 2023-07-13 15:16:07,915 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=64, state=SUCCESS; CloseRegionProcedure e85e437d7379d36a5e4d751cbc62d431, server=jenkins-hbase4.apache.org,34377,1689261361353 in 186 msec 2023-07-13 15:16:07,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:07,916 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=7179648048e4a95f6677db29f497c13c, regionState=CLOSED 2023-07-13 15:16:07,916 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367916"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261367916"}]},"ts":"1689261367916"} 2023-07-13 15:16:07,917 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e85e437d7379d36a5e4d751cbc62d431, UNASSIGN in 195 msec 2023-07-13 15:16:07,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:07,918 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048. 
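[Editor's note] The interleaved "Checking to see if procedure is done pid=63" lines are the master answering the client's completion polling while the regions above close; the later "HBaseAdmin$TableFuture ... Operation: DISABLE ... completed" record is where that wait ends. A sketch of the same wait expressed through the async Admin API follows; the five-minute timeout is purely illustrative.

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DisableWaitSketch {
      static void disableAndWait(Admin admin, TableName tn) throws Exception {
        // The blocking wait behind HBaseAdmin$TableFuture: submit the disable,
        // then block until the DisableTableProcedure and its UNASSIGN children
        // report SUCCESS on the master.
        Future<Void> pending = admin.disableTableAsync(tn);
        pending.get(5, TimeUnit.MINUTES); // illustrative timeout
      }
    }
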
2023-07-13 15:16:07,918 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bcc05e0aadf7330e902722eb3326b048: 2023-07-13 15:16:07,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:07,921 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=bcc05e0aadf7330e902722eb3326b048, regionState=CLOSED 2023-07-13 15:16:07,922 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689261367921"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261367921"}]},"ts":"1689261367921"} 2023-07-13 15:16:07,922 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=65 2023-07-13 15:16:07,922 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=65, state=SUCCESS; CloseRegionProcedure 7179648048e4a95f6677db29f497c13c, server=jenkins-hbase4.apache.org,32995,1689261357367 in 189 msec 2023-07-13 15:16:07,924 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7179648048e4a95f6677db29f497c13c, UNASSIGN in 202 msec 2023-07-13 15:16:07,927 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=66 2023-07-13 15:16:07,928 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; CloseRegionProcedure bcc05e0aadf7330e902722eb3326b048, server=jenkins-hbase4.apache.org,34377,1689261361353 in 196 msec 2023-07-13 15:16:07,930 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=63 2023-07-13 15:16:07,930 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bcc05e0aadf7330e902722eb3326b048, UNASSIGN in 208 msec 2023-07-13 15:16:07,930 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261367930"}]},"ts":"1689261367930"} 2023-07-13 15:16:07,932 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-13 15:16:07,935 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-13 15:16:07,937 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 224 msec 2023-07-13 15:16:08,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-13 15:16:08,018 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-13 15:16:08,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:08,034 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:08,036 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:08,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_44995158' 2023-07-13 15:16:08,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,040 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:08,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:08,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:08,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-13 15:16:08,060 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:08,060 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:08,061 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:08,061 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:08,061 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:08,064 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819/recovered.edits] 2023-07-13 15:16:08,064 
DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364/recovered.edits] 2023-07-13 15:16:08,065 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431/recovered.edits] 2023-07-13 15:16:08,066 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c/recovered.edits] 2023-07-13 15:16:08,069 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048/recovered.edits] 2023-07-13 15:16:08,081 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364/recovered.edits/4.seqid 2023-07-13 15:16:08,083 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a9956140971f7a536cf75dc6d7c1364 2023-07-13 15:16:08,083 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819/recovered.edits/4.seqid 2023-07-13 15:16:08,084 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2d128c18fab22ca1cb9b9c04567ab819 2023-07-13 15:16:08,086 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048/recovered.edits/4.seqid 2023-07-13 15:16:08,086 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431/recovered.edits/4.seqid 2023-07-13 15:16:08,087 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c/recovered.edits/4.seqid 2023-07-13 15:16:08,087 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bcc05e0aadf7330e902722eb3326b048 2023-07-13 15:16:08,087 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e85e437d7379d36a5e4d751cbc62d431 2023-07-13 15:16:08,087 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7179648048e4a95f6677db29f497c13c 2023-07-13 15:16:08,088 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 15:16:08,091 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:08,098 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-13 15:16:08,101 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-13 15:16:08,102 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:08,102 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-13 15:16:08,103 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261368103"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:08,103 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261368103"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:08,103 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261368103"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:08,103 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261368103"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:08,103 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261368103"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:08,105 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 15:16:08,105 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e85e437d7379d36a5e4d751cbc62d431, NAME => 'Group_testTableMoveTruncateAndDrop,,1689261366649.e85e437d7379d36a5e4d751cbc62d431.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 7179648048e4a95f6677db29f497c13c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689261366649.7179648048e4a95f6677db29f497c13c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => bcc05e0aadf7330e902722eb3326b048, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689261366649.bcc05e0aadf7330e902722eb3326b048.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 3a9956140971f7a536cf75dc6d7c1364, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689261366649.3a9956140971f7a536cf75dc6d7c1364.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 2d128c18fab22ca1cb9b9c04567ab819, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689261366649.2d128c18fab22ca1cb9b9c04567ab819.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 15:16:08,105 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
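[Editor's note] The DeleteTableProcedure (pid=74) seen here archives each region directory and then removes the five region rows, and finally the table-state row, from hbase:meta. On the client side the whole chain comes from a single deleteTable call on the already-disabled table; a minimal sketch, reusing an Admin handle as above:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DropTableSketch {
      static void dropTable(Admin admin, TableName tn) throws Exception {
        // deleteTable only accepts a disabled table; the disable already
        // happened above (pid=63), so this guard is normally a no-op here.
        if (!admin.isTableDisabled(tn)) {
          admin.disableTable(tn);
        }
        admin.deleteTable(tn);
        // Once the DeleteTableProcedure finishes, the descriptor, the region
        // directories and the hbase:meta rows are all gone.
        assert !admin.tableExists(tn);
      }
    }
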
2023-07-13 15:16:08,106 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261368105"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:08,107 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-13 15:16:08,110 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 15:16:08,112 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 84 msec 2023-07-13 15:16:08,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-13 15:16:08,158 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-13 15:16:08,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:08,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:08,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:08,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
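[Editor's note] With the table gone, the test's teardown inspects the rsgroup state; each GetRSGroupInfo / ListRSGroupInfos record above corresponds to one client call through the rsgroup coprocessor endpoint. Below is a sketch of those reads, assuming the RSGroupAdminClient API from the hbase-rsgroup module (the same client the stack trace further below goes through); the group name is the per-test name taken from the log.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupInfoSketch {
      static void dumpGroups(Connection conn) throws IOException {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        // Each call shows up on the master as a GetRSGroupInfo /
        // ListRSGroupInfos master service request, as in the records above.
        RSGroupInfo info =
            groupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_44995158");
        if (info != null) {
          System.out.println(info.getName() + " servers=" + info.getServers());
        }
        for (RSGroupInfo g : groupAdmin.listRSGroups()) {
          System.out.println(g.getName() + " tables=" + g.getTables());
        }
      }
    }
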
2023-07-13 15:16:08,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:08,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377] to rsgroup default 2023-07-13 15:16:08,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:08,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:08,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_44995158, current retry=0 2023-07-13 15:16:08,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353] are moved back to Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:08,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_44995158 => default 2023-07-13 15:16:08,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:08,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_44995158 2023-07-13 15:16:08,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:08,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:08,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:08,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
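[Editor's note] Teardown then empties the test group: the two group servers (ports 32995 and 34377) are moved back to the default group and the group itself is removed, which is why the ZK GroupInfo count drops from 6 to 5 above. A minimal sketch of that pattern, again assuming the RSGroupAdminClient API; the test's VerifyingRSGroupAdminClient wrapper (visible in the stack trace below) adds verification that is omitted here. The ConstraintException that follows is the teardown's attempt to move the master's own address (port 33053) into the "master" group, which RSGroupAdminServer rejects because that address is not an online region server; the test logs it as a WARN and continues.

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupTeardownSketch {
      static void emptyAndRemoveGroup(Connection conn, String group) throws IOException {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo info = groupAdmin.getRSGroupInfo(group);
        if (info == null) {
          return; // group already removed
        }
        // A group can only be removed once it holds no servers and no tables,
        // so first push its servers back into the default group.
        Set<Address> servers = new HashSet<>(info.getServers());
        if (!servers.isEmpty()) {
          groupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
        }
        groupAdmin.removeRSGroup(group);
      }
    }
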
2023-07-13 15:16:08,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:08,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:08,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:08,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:08,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:08,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:08,207 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:08,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:08,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:08,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:08,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:08,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:08,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262568223, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:08,224 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:08,226 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:08,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,228 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:08,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:08,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:08,256 INFO [Listener at localhost/37749] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=498 (was 424) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
PacketResponder: BP-1514897013-172.31.14.131-1689261351889:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1623508870_17 at /127.0.0.1:40364 [Receiving block BP-1514897013-172.31.14.131-1689261351889:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1514897013-172.31.14.131-1689261351889:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536-prefix:jenkins-hbase4.apache.org,34377,1689261361353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:34377Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34377 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-638 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:37375 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-4-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1623508870_17 at /127.0.0.1:48244 [Receiving block BP-1514897013-172.31.14.131-1689261351889:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1573316261_17 at /127.0.0.1:40386 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34377-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1918969580-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1623508870_17 at /127.0.0.1:40750 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-640-acceptor-0@604c0bf5-ServerConnector@53265acb{HTTP/1.1, (http/1.1)}{0.0.0.0:44651} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1514897013-172.31.14.131-1689261351889:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52275@0x26d6c895 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
qtp1918969580-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:37375 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52275@0x26d6c895-SendThread(127.0.0.1:52275) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1623508870_17 at /127.0.0.1:40736 [Receiving block BP-1514897013-172.31.14.131-1689261351889:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
Session-HouseKeeper-519b7a7c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52275@0x26d6c895-EventThread sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1573316261_17 at /127.0.0.1:48372 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=789 (was 678) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=534 (was 484) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=4830 (was 5348) 2023-07-13 15:16:08,274 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=534, ProcessCount=172, AvailableMemoryMB=4829 2023-07-13 15:16:08,275 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-13 15:16:08,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:08,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:08,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:08,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:08,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:08,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:08,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:08,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:08,300 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:08,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:08,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:08,308 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:08,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:08,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:08,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262568314, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:08,315 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:08,316 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:08,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,318 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:08,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:08,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:08,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-13 15:16:08,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:08,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:50614 deadline: 1689262568320, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 15:16:08,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-13 15:16:08,322 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:08,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:50614 deadline: 1689262568321, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 15:16:08,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-13 15:16:08,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:08,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:50614 deadline: 1689262568323, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 15:16:08,324 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-13 15:16:08,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-13 15:16:08,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:08,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:08,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:08,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
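The rejections of "foo*", "foo@" and "-" above, together with the acceptance of "foo_123", indicate that rsgroup names are restricted to letters, digits and underscores (even though the error text says only "alphanumeric"). The following minimal Java sketch mirrors that observed behavior; it is a hypothetical stand-in, not the actual RSGroupInfoManagerImpl.checkGroupName code, and it throws IllegalArgumentException where the real path raises ConstraintException.

    import java.util.regex.Pattern;

    // Hypothetical mirror of the rsgroup name check implied by the log above:
    // "foo*", "foo@" and "-" are rejected, "foo_123" is accepted, so the
    // accepted alphabet appears to be [A-Za-z0-9_].
    public class RSGroupNameCheck {
        private static final Pattern VALID = Pattern.compile("[A-Za-z0-9_]+");

        static void checkGroupName(String name) {
            if (name == null || !VALID.matcher(name).matches()) {
                throw new IllegalArgumentException(
                    "RSGroup name should only contain alphanumeric characters: " + name);
            }
        }

        public static void main(String[] args) {
            for (String name : new String[] { "foo*", "foo@", "-", "foo_123" }) {
                try {
                    checkGroupName(name);
                    System.out.println(name + " -> accepted");
                } catch (IllegalArgumentException e) {
                    System.out.println(name + " -> rejected");
                }
            }
        }
    }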
2023-07-13 15:16:08,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:08,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:08,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:08,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-13 15:16:08,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:08,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:08,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:08,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
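The "Updating znode" and "Writing ZK GroupInfo count" entries above show the group manager persisting one child znode per rsgroup under /hbase/rsgroup (default, master, foo_123 in this run). The sketch below is a hypothetical inspection helper using the plain ZooKeeper client, assuming the mini-cluster quorum at 127.0.0.1:52275 seen earlier in this log; it is not part of the test or of the rsgroup implementation.

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    // Hypothetical helper: list the rsgroup znodes that the log above reports
    // being updated (/hbase/rsgroup/default, /hbase/rsgroup/master, ...).
    public class ListRSGroupZNodes {
        public static void main(String[] args) throws Exception {
            // 127.0.0.1:52275 is the ZooKeeper address used by this test run.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:52275", 30_000, event -> { });
            try {
                List<String> children = zk.getChildren("/hbase/rsgroup", false);
                for (String child : children) {
                    System.out.println("/hbase/rsgroup/" + child);
                }
            } finally {
                zk.close();
            }
        }
    }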
2023-07-13 15:16:08,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:08,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:08,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:08,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:08,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:08,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:08,370 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:08,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:08,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:08,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:08,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:08,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:08,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262568393, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:08,393 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:08,395 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:08,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,396 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:08,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:08,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:08,415 INFO [Listener at localhost/37749] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=501 (was 498) Potentially hanging thread: hconnection-0x120ad869-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=789 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=534 (was 534), ProcessCount=172 (was 172), AvailableMemoryMB=4839 (was 4829) - AvailableMemoryMB LEAK? - 2023-07-13 15:16:08,416 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-13 15:16:08,436 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=501, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=534, ProcessCount=172, AvailableMemoryMB=4840 2023-07-13 15:16:08,437 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-13 15:16:08,437 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-13 15:16:08,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:08,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
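The long "Potentially hanging thread" listings and the "Thread=501 is superior to 500" warning above come from the ResourceChecker, which compares thread counts (along with open file descriptors, system load and available memory) before and after each test method. A comparable snapshot can be taken with plain JDK calls, as in this hypothetical sketch; it is not the ResourceChecker implementation, only a way to reproduce the same kind of dump.

    import java.util.Map;

    // Hypothetical stand-in for the thread snapshot printed above: enumerate all
    // live threads with their stacks so leaked pool threads (for example the
    // "hconnection-...-shared-pool-*" workers) can be spotted between tests.
    public class ThreadDumpSnapshot {
        public static void main(String[] args) {
            Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
            System.out.println("Live threads: " + stacks.size());
            for (Map.Entry<Thread, StackTraceElement[]> entry : stacks.entrySet()) {
                System.out.println("Potentially hanging thread: " + entry.getKey().getName());
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    " + frame);
                }
            }
        }
    }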
2023-07-13 15:16:08,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:08,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:08,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:08,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:08,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:08,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:08,459 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:08,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:08,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:08,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:08,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:08,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:08,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262568482, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:08,483 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:08,485 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:08,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,487 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:08,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:08,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:08,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:08,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:08,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
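
The block above is the test's restore step between methods: it removes and re-adds the "master" rsgroup, and the attempt to move the master's own address into it fails with the logged ConstraintException, which TestRSGroupsBase only reports as "Got this on setup, FYI". A minimal client-side sketch of those calls, assuming the RSGroupAdminClient API named in the stack trace (class and method names are taken from the trace; the connection setup and the hard-coded host/port are illustrative only):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupRestoreSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      groups.removeRSGroup("master");   // RemoveRSGroup request in the log
      groups.addRSGroup("master");      // AddRSGroup request in the log
      try {
        // Host/port copied from the log for illustration; the master's RPC port is not a
        // live region server, so the move is rejected.
        groups.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33053)),
            "master");
      } catch (ConstraintException e) {
        // "Server ... is either offline or it does not exist." -- tolerated by the test
      }
      groups.listRSGroups();            // ListRSGroupInfos request in the log
    }
  }
}

Only live region servers can be moved between groups, which is why the master's service port 33053 is rejected as "either offline or it does not exist".
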
2023-07-13 15:16:08,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 15:16:08,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:08,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:08,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:08,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:08,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:34377] to rsgroup bar 2023-07-13 15:16:08,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:08,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 15:16:08,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:08,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:08,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(238): Moving server region 1c39d35808badfb6a5d66d7a6a08f142, which do not belong to RSGroup bar 2023-07-13 15:16:08,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=1c39d35808badfb6a5d66d7a6a08f142, REOPEN/MOVE 2023-07-13 15:16:08,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-13 15:16:08,513 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=1c39d35808badfb6a5d66d7a6a08f142, REOPEN/MOVE 2023-07-13 15:16:08,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 15:16:08,515 INFO [PEWorker-2] assignment.RegionStateStore(219): 
pid=75 updating hbase:meta row=1c39d35808badfb6a5d66d7a6a08f142, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:08,516 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 15:16:08,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-13 15:16:08,516 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261368515"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261368515"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261368515"}]},"ts":"1689261368515"} 2023-07-13 15:16:08,517 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40971,1689261357748, state=CLOSING 2023-07-13 15:16:08,518 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 1c39d35808badfb6a5d66d7a6a08f142, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:08,519 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:08,520 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:08,520 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:08,520 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 1c39d35808badfb6a5d66d7a6a08f142, server=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:08,672 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-13 15:16:08,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:08,674 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:08,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:08,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:08,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:08,674 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=41.95 KB heapSize=64.95 KB 2023-07-13 15:16:08,695 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed 
memstore data size=38.89 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/info/0a8bf8ec043f44eba383d623c0af6299 2023-07-13 15:16:08,702 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a8bf8ec043f44eba383d623c0af6299 2023-07-13 15:16:08,751 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/rep_barrier/1d682e0587da48068431241dc4e23ea2 2023-07-13 15:16:08,761 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1d682e0587da48068431241dc4e23ea2 2023-07-13 15:16:08,787 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/table/b6d210097f134901a38ddefd3fd393ff 2023-07-13 15:16:08,795 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b6d210097f134901a38ddefd3fd393ff 2023-07-13 15:16:08,796 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/info/0a8bf8ec043f44eba383d623c0af6299 as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info/0a8bf8ec043f44eba383d623c0af6299 2023-07-13 15:16:08,806 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a8bf8ec043f44eba383d623c0af6299 2023-07-13 15:16:08,806 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info/0a8bf8ec043f44eba383d623c0af6299, entries=46, sequenceid=95, filesize=10.2 K 2023-07-13 15:16:08,810 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/rep_barrier/1d682e0587da48068431241dc4e23ea2 as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier/1d682e0587da48068431241dc4e23ea2 2023-07-13 15:16:08,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1d682e0587da48068431241dc4e23ea2 2023-07-13 15:16:08,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier/1d682e0587da48068431241dc4e23ea2, entries=10, sequenceid=95, filesize=6.1 K 2023-07-13 15:16:08,819 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): 
Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/table/b6d210097f134901a38ddefd3fd393ff as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table/b6d210097f134901a38ddefd3fd393ff 2023-07-13 15:16:08,827 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b6d210097f134901a38ddefd3fd393ff 2023-07-13 15:16:08,828 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table/b6d210097f134901a38ddefd3fd393ff, entries=15, sequenceid=95, filesize=6.2 K 2023-07-13 15:16:08,829 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~41.95 KB/42961, heapSize ~64.91 KB/66464, currentSize=0 B/0 for 1588230740 in 155ms, sequenceid=95, compaction requested=false 2023-07-13 15:16:08,848 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-13 15:16:08,849 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:08,850 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:08,850 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:08,850 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44089,1689261357555 record at close sequenceid=95 2023-07-13 15:16:08,852 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-13 15:16:08,853 WARN [PEWorker-3] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-13 15:16:08,855 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-13 15:16:08,855 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40971,1689261357748 in 334 msec 2023-07-13 15:16:08,856 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:09,007 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44089,1689261357555, state=OPENING 2023-07-13 15:16:09,008 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:09,011 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:09,011 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:09,168 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 15:16:09,168 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:09,170 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44089%2C1689261357555.meta, suffix=.meta, logDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,44089,1689261357555, archiveDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs, maxLogs=32 2023-07-13 15:16:09,195 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK] 2023-07-13 15:16:09,196 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK] 2023-07-13 15:16:09,197 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK] 2023-07-13 15:16:09,200 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/WALs/jenkins-hbase4.apache.org,44089,1689261357555/jenkins-hbase4.apache.org%2C44089%2C1689261357555.meta.1689261369171.meta 2023-07-13 15:16:09,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33525,DS-714f3de1-2f7f-4438-96c5-f1f766536cbb,DISK], DatanodeInfoWithStorage[127.0.0.1:36081,DS-7937480f-287a-496c-8e6d-49e1ae6250f9,DISK], DatanodeInfoWithStorage[127.0.0.1:44071,DS-ec272a69-f8e9-4a22-bc93-b60166fb9a9c,DISK]] 2023-07-13 15:16:09,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:09,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:09,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 15:16:09,201 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-13 15:16:09,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 15:16:09,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:09,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 15:16:09,202 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 15:16:09,203 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:09,204 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info 2023-07-13 15:16:09,205 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info 2023-07-13 15:16:09,205 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:09,215 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a8bf8ec043f44eba383d623c0af6299 2023-07-13 15:16:09,215 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info/0a8bf8ec043f44eba383d623c0af6299 2023-07-13 15:16:09,216 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:09,216 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:09,217 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:09,217 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:09,217 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:09,227 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1d682e0587da48068431241dc4e23ea2 2023-07-13 15:16:09,227 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier/1d682e0587da48068431241dc4e23ea2 2023-07-13 15:16:09,227 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:09,228 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:09,229 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table 2023-07-13 15:16:09,229 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table 2023-07-13 15:16:09,230 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:09,242 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b6d210097f134901a38ddefd3fd393ff 2023-07-13 15:16:09,242 DEBUG 
[StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table/b6d210097f134901a38ddefd3fd393ff 2023-07-13 15:16:09,243 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:09,243 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740 2023-07-13 15:16:09,246 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740 2023-07-13 15:16:09,249 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 15:16:09,250 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:09,252 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=99; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9579455200, jitterRate=-0.10784371197223663}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:09,252 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:09,255 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=79, masterSystemTime=1689261369163 2023-07-13 15:16:09,257 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 15:16:09,257 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 15:16:09,257 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44089,1689261357555, state=OPEN 2023-07-13 15:16:09,259 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:09,259 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:09,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-13 15:16:09,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44089,1689261357555 in 251 msec 2023-07-13 15:16:09,263 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, 
REOPEN/MOVE in 749 msec 2023-07-13 15:16:09,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1c39d35808badfb6a5d66d7a6a08f142, disabling compactions & flushes 2023-07-13 15:16:09,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:09,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:09,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. after waiting 0 ms 2023-07-13 15:16:09,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:09,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1c39d35808badfb6a5d66d7a6a08f142 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-13 15:16:09,443 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/.tmp/info/169ea3c3650748eb92da43fcc60abad7 2023-07-13 15:16:09,459 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/.tmp/info/169ea3c3650748eb92da43fcc60abad7 as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/info/169ea3c3650748eb92da43fcc60abad7 2023-07-13 15:16:09,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/info/169ea3c3650748eb92da43fcc60abad7, entries=2, sequenceid=6, filesize=4.8 K 2023-07-13 15:16:09,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 1c39d35808badfb6a5d66d7a6a08f142 in 59ms, sequenceid=6, compaction requested=false 2023-07-13 15:16:09,516 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-13 15:16:09,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-13 15:16:09,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 
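
The REOPEN/MOVE procedures above (pid=75 and pid=76) use the same mechanism as an ordinary region move: close the region on the source server, flush its memstore as part of the close, then open it on the destination. A sketch of requesting such a move through the public Admin API, assuming a reachable cluster; the destination ServerName mirrors the one in the log but is otherwise illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class MoveRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Destination taken from the log (jenkins-hbase4.apache.org,44089,1689261357555); illustrative.
      ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org", 44089, 1689261357555L);
      for (RegionInfo region : admin.getRegions(TableName.valueOf("hbase:namespace"))) {
        // Triggers a TransitRegionStateProcedure (REOPEN/MOVE): close on the current server,
        // flush of the memstore during close, then open on the destination.
        admin.move(region.getEncodedNameAsBytes(), dest);
      }
    }
  }
}
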
2023-07-13 15:16:09,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1c39d35808badfb6a5d66d7a6a08f142: 2023-07-13 15:16:09,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1c39d35808badfb6a5d66d7a6a08f142 move to jenkins-hbase4.apache.org,44089,1689261357555 record at close sequenceid=6 2023-07-13 15:16:09,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,526 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=1c39d35808badfb6a5d66d7a6a08f142, regionState=CLOSED 2023-07-13 15:16:09,527 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261369526"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261369526"}]},"ts":"1689261369526"} 2023-07-13 15:16:09,527 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40971] ipc.CallRunner(144): callId: 186 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:35616 deadline: 1689261429527, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44089 startCode=1689261357555. As of locationSeqNum=95. 2023-07-13 15:16:09,633 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-13 15:16:09,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure 1c39d35808badfb6a5d66d7a6a08f142, server=jenkins-hbase4.apache.org,40971,1689261357748 in 1.1130 sec 2023-07-13 15:16:09,637 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1c39d35808badfb6a5d66d7a6a08f142, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:09,788 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=1c39d35808badfb6a5d66d7a6a08f142, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:09,788 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261369788"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261369788"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261369788"}]},"ts":"1689261369788"} 2023-07-13 15:16:09,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure 1c39d35808badfb6a5d66d7a6a08f142, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:09,952 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 
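
The RegionStateStore puts above rewrite the region's row in hbase:meta, and a client that still holds the old location gets the logged RegionMovedException and refreshes its cache. A small sketch of observing the new assignment from the client side with the standard RegionLocator API; the empty start key matches the single hbase:namespace region seen in this log:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class LocateNamespaceRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
      // reload=true bypasses the client-side cache, so the location written by the
      // RegionStateStore put above is read back from hbase:meta directly.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println(loc.getRegion().getEncodedName() + " is on " + loc.getServerName());
    }
  }
}
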
2023-07-13 15:16:09,952 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1c39d35808badfb6a5d66d7a6a08f142, NAME => 'hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:09,953 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,953 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:09,953 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,953 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,959 INFO [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,960 DEBUG [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/info 2023-07-13 15:16:09,960 DEBUG [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/info 2023-07-13 15:16:09,961 INFO [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1c39d35808badfb6a5d66d7a6a08f142 columnFamilyName info 2023-07-13 15:16:09,969 DEBUG [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] regionserver.HStore(539): loaded hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/info/169ea3c3650748eb92da43fcc60abad7 2023-07-13 15:16:09,969 INFO [StoreOpener-1c39d35808badfb6a5d66d7a6a08f142-1] regionserver.HStore(310): Store=1c39d35808badfb6a5d66d7a6a08f142/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:09,970 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,972 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,980 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:09,981 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1c39d35808badfb6a5d66d7a6a08f142; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10514430400, jitterRate=-0.02076736092567444}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:09,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1c39d35808badfb6a5d66d7a6a08f142: 2023-07-13 15:16:09,982 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142., pid=80, masterSystemTime=1689261369947 2023-07-13 15:16:09,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:09,987 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 
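
Once the OpenRegionProcedure finishes, the test side does not sleep a fixed interval; the earlier "Waiting up to [60,000] milli-secs" lines come from the Waiter utility in HBase's test support code. A sketch of that polling pattern, assuming org.apache.hadoop.hbase.Waiter is on the classpath (its packaging in the hbase-common test artifact is an assumption here) and using the region count as an example condition:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WaitForAssignmentSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Poll until hbase:namespace reports exactly one online region again,
      // mirroring the wait-for-condition style the test harness logs.
      Waiter.waitFor(conf, 60_000, new Waiter.Predicate<IOException>() {
        @Override
        public boolean evaluate() throws IOException {
          return admin.getRegions(TableName.valueOf("hbase:namespace")).size() == 1;
        }
      });
    }
  }
}
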
2023-07-13 15:16:09,991 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=1c39d35808badfb6a5d66d7a6a08f142, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:09,991 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261369990"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261369990"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261369990"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261369990"}]},"ts":"1689261369990"} 2023-07-13 15:16:09,995 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-13 15:16:09,995 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure 1c39d35808badfb6a5d66d7a6a08f142, server=jenkins-hbase4.apache.org,44089,1689261357555 in 198 msec 2023-07-13 15:16:09,996 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1c39d35808badfb6a5d66d7a6a08f142, REOPEN/MOVE in 1.4840 sec 2023-07-13 15:16:10,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353, jenkins-hbase4.apache.org,40971,1689261357748] are moved back to default 2023-07-13 15:16:10,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-13 15:16:10,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:10,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:10,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:10,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-13 15:16:10,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:10,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:10,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-13 15:16:10,531 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:10,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-13 15:16:10,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 15:16:10,534 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:10,535 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 15:16:10,535 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:10,535 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:10,538 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:10,539 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,540 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 empty. 
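
The CreateTableProcedure above (pid=81) is the server side of the 'Group_testFailRemoveGroup' create request logged a few entries earlier, with a single family 'f' and REGION_REPLICATION => '1'. A sketch of issuing the same request through the Admin builder API; only the attributes visible in the log are set, everything else is left at its default:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Single family 'f' with the settings shown in the log (VERSIONS => '1', BLOOMFILTER => 'NONE').
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.NONE)
              .build())
          .build());
    }
  }
}
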
2023-07-13 15:16:10,541 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,541 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-13 15:16:10,558 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:10,560 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 13b3c09b6477682865b23aa8d30465e9, NAME => 'Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:10,576 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:10,577 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 13b3c09b6477682865b23aa8d30465e9, disabling compactions & flushes 2023-07-13 15:16:10,577 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:10,577 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:10,577 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. after waiting 0 ms 2023-07-13 15:16:10,577 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:10,577 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 
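
The RegionOpenAndInit step above only initialises the new region directory under .tmp; the entries that follow add the region to hbase:meta (CREATE_TABLE_ADD_TO_META), mark the table ENABLING, and assign the single region. Once that assignment completes, a caller can confirm the table is usable without reading hbase:meta by hand. A sketch using standard Admin checks:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TableAvailableSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // The table becomes available once its region is assigned and the
      // table state recorded in hbase:meta flips from ENABLING to ENABLED.
      boolean ready = admin.isTableAvailable(tn) && admin.isTableEnabled(tn);
      System.out.println(tn + " available=" + ready + ", regions=" + admin.getRegions(tn).size());
    }
  }
}
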
2023-07-13 15:16:10,577 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 13b3c09b6477682865b23aa8d30465e9: 2023-07-13 15:16:10,580 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:10,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261370581"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261370581"}]},"ts":"1689261370581"} 2023-07-13 15:16:10,583 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:10,585 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:10,585 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261370585"}]},"ts":"1689261370585"} 2023-07-13 15:16:10,587 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-13 15:16:10,595 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, ASSIGN}] 2023-07-13 15:16:10,597 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, ASSIGN 2023-07-13 15:16:10,598 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:10,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 15:16:10,750 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:10,750 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261370750"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261370750"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261370750"}]},"ts":"1689261370750"} 2023-07-13 15:16:10,752 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 
15:16:10,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 15:16:10,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:10,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 13b3c09b6477682865b23aa8d30465e9, NAME => 'Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:10,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:10,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,910 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,913 DEBUG [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/f 2023-07-13 15:16:10,913 DEBUG [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/f 2023-07-13 15:16:10,913 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 13b3c09b6477682865b23aa8d30465e9 columnFamilyName f 2023-07-13 15:16:10,914 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] regionserver.HStore(310): Store=13b3c09b6477682865b23aa8d30465e9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:10,915 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:10,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:10,922 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 13b3c09b6477682865b23aa8d30465e9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10074498400, jitterRate=-0.06173922121524811}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:10,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 13b3c09b6477682865b23aa8d30465e9: 2023-07-13 15:16:10,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9., pid=83, masterSystemTime=1689261370904 2023-07-13 15:16:10,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:10,925 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 
2023-07-13 15:16:10,925 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:10,925 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261370925"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261370925"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261370925"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261370925"}]},"ts":"1689261370925"} 2023-07-13 15:16:10,929 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-13 15:16:10,929 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,44089,1689261357555 in 175 msec 2023-07-13 15:16:10,931 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-13 15:16:10,931 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, ASSIGN in 334 msec 2023-07-13 15:16:10,933 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:10,933 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261370933"}]},"ts":"1689261370933"} 2023-07-13 15:16:10,935 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-13 15:16:10,939 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:10,942 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 412 msec 2023-07-13 15:16:10,949 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:11,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 15:16:11,137 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-13 15:16:11,137 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-13 15:16:11,138 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:11,139 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40971] ipc.CallRunner(144): callId: 277 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:45134 deadline: 1689261431139, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44089 startCode=1689261357555. As of locationSeqNum=95. 2023-07-13 15:16:11,242 DEBUG [hconnection-0x497c82a-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:11,246 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37992, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:11,259 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-13 15:16:11,259 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:11,259 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-13 15:16:11,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-13 15:16:11,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:11,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 15:16:11,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:11,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:11,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-13 15:16:11,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region 13b3c09b6477682865b23aa8d30465e9 to RSGroup bar 2023-07-13 15:16:11,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:11,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:11,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:11,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:11,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 15:16:11,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:11,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, REOPEN/MOVE 2023-07-13 15:16:11,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-13 15:16:11,275 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, REOPEN/MOVE 2023-07-13 15:16:11,276 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:11,276 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261371276"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261371276"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261371276"}]},"ts":"1689261371276"} 2023-07-13 15:16:11,278 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:11,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 13b3c09b6477682865b23aa8d30465e9, disabling compactions & flushes 2023-07-13 15:16:11,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:11,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:11,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. after waiting 0 ms 2023-07-13 15:16:11,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:11,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:11,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 
2023-07-13 15:16:11,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 13b3c09b6477682865b23aa8d30465e9: 2023-07-13 15:16:11,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 13b3c09b6477682865b23aa8d30465e9 move to jenkins-hbase4.apache.org,32995,1689261357367 record at close sequenceid=2 2023-07-13 15:16:11,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,442 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=CLOSED 2023-07-13 15:16:11,442 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261371442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261371442"}]},"ts":"1689261371442"} 2023-07-13 15:16:11,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-13 15:16:11,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,44089,1689261357555 in 166 msec 2023-07-13 15:16:11,446 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:11,596 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:11,597 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:11,597 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261371597"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261371597"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261371597"}]},"ts":"1689261371597"} 2023-07-13 15:16:11,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:11,755 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 
2023-07-13 15:16:11,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 13b3c09b6477682865b23aa8d30465e9, NAME => 'Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:11,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:11,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,759 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,760 DEBUG [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/f 2023-07-13 15:16:11,760 DEBUG [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/f 2023-07-13 15:16:11,761 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 13b3c09b6477682865b23aa8d30465e9 columnFamilyName f 2023-07-13 15:16:11,761 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] regionserver.HStore(310): Store=13b3c09b6477682865b23aa8d30465e9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:11,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,764 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,769 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:11,770 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 13b3c09b6477682865b23aa8d30465e9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11862380800, jitterRate=0.10477030277252197}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:11,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 13b3c09b6477682865b23aa8d30465e9: 2023-07-13 15:16:11,771 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9., pid=86, masterSystemTime=1689261371751 2023-07-13 15:16:11,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:11,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:11,773 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:11,773 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261371773"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261371773"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261371773"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261371773"}]},"ts":"1689261371773"} 2023-07-13 15:16:11,776 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-13 15:16:11,777 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,32995,1689261357367 in 176 msec 2023-07-13 15:16:11,778 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, REOPEN/MOVE in 504 msec 2023-07-13 15:16:11,916 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-13 15:16:12,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-13 15:16:12,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
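On the client side, the move that produced the REOPEN/MOVE procedure above is a single RSGroup admin call. A minimal sketch, assuming the RSGroupAdminClient API from the branch-2 hbase-rsgroup module and an already-open Connection (the wrapper class and method names are illustrative):

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    class MoveTableToBar {
      static void moveTableToBar(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Reassigns every region of the table to servers in group 'bar';
        // the master logs this as an RSGroupAdminService.MoveTables request.
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
      }
    }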
2023-07-13 15:16:12,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:12,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:12,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:12,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-13 15:16:12,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:12,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-13 15:16:12,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:12,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:50614 deadline: 1689262572284, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-13 15:16:12,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:34377] to rsgroup default 2023-07-13 15:16:12,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:12,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:50614 deadline: 1689262572286, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-13 15:16:12,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-13 15:16:12,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:12,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 15:16:12,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:12,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:12,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-13 15:16:12,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region 13b3c09b6477682865b23aa8d30465e9 to RSGroup default 2023-07-13 15:16:12,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, REOPEN/MOVE 2023-07-13 15:16:12,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 15:16:12,296 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, REOPEN/MOVE 2023-07-13 15:16:12,297 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:12,297 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261372297"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261372297"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261372297"}]},"ts":"1689261372297"} 2023-07-13 15:16:12,303 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:12,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 13b3c09b6477682865b23aa8d30465e9, disabling compactions & flushes 2023-07-13 15:16:12,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:12,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:12,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. after waiting 0 ms 2023-07-13 15:16:12,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:12,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:12,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 
2023-07-13 15:16:12,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 13b3c09b6477682865b23aa8d30465e9: 2023-07-13 15:16:12,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 13b3c09b6477682865b23aa8d30465e9 move to jenkins-hbase4.apache.org,44089,1689261357555 record at close sequenceid=5 2023-07-13 15:16:12,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,472 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=CLOSED 2023-07-13 15:16:12,472 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261372472"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261372472"}]},"ts":"1689261372472"} 2023-07-13 15:16:12,479 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-13 15:16:12,479 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,32995,1689261357367 in 175 msec 2023-07-13 15:16:12,482 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:12,633 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:12,633 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261372633"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261372633"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261372633"}]},"ts":"1689261372633"} 2023-07-13 15:16:12,641 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:12,798 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 
2023-07-13 15:16:12,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 13b3c09b6477682865b23aa8d30465e9, NAME => 'Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:12,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:12,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,800 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,801 DEBUG [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/f 2023-07-13 15:16:12,801 DEBUG [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/f 2023-07-13 15:16:12,802 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 13b3c09b6477682865b23aa8d30465e9 columnFamilyName f 2023-07-13 15:16:12,802 INFO [StoreOpener-13b3c09b6477682865b23aa8d30465e9-1] regionserver.HStore(310): Store=13b3c09b6477682865b23aa8d30465e9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:12,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,805 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:12,808 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 13b3c09b6477682865b23aa8d30465e9; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11989961920, jitterRate=0.1166522204875946}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:12,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 13b3c09b6477682865b23aa8d30465e9: 2023-07-13 15:16:12,809 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9., pid=89, masterSystemTime=1689261372793 2023-07-13 15:16:12,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:12,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:12,811 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:12,812 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261372811"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261372811"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261372811"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261372811"}]},"ts":"1689261372811"} 2023-07-13 15:16:12,818 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-13 15:16:12,819 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,44089,1689261357555 in 173 msec 2023-07-13 15:16:12,820 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, REOPEN/MOVE in 525 msec 2023-07-13 15:16:13,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-13 15:16:13,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-13 15:16:13,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:13,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-13 15:16:13,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:13,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:50614 deadline: 1689262573304, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
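Both rejections recorded above (first because group 'bar' still holds a table, then because it still holds servers) reach the caller as a ConstraintException raised by the master. A hedged sketch of how client code would exercise and tolerate that, again assuming RSGroupAdminClient (names illustrative; the catch is kept broad as IOException, of which ConstraintException is a subclass):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    class RemoveGroupWhileNonEmpty {
      static void tryRemove(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
          // Still rejected while the group owns tables or servers.
          rsGroupAdmin.removeRSGroup("bar");
        } catch (IOException e) {
          // Master answers with a ConstraintException ("RSGroup bar has ..."),
          // matching the entries logged above.
        }
      }
    }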
2023-07-13 15:16:13,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:34377] to rsgroup default 2023-07-13 15:16:13,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 15:16:13,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:13,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-13 15:16:13,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353, jenkins-hbase4.apache.org,40971,1689261357748] are moved back to bar 2023-07-13 15:16:13,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-13 15:16:13,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:13,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-13 15:16:13,325 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40971] ipc.CallRunner(144): callId: 214 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:35616 deadline: 1689261433325, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44089 startCode=1689261357555. As of locationSeqNum=6. 
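With the table back in 'default', the remaining steps recorded here are to drain the group's servers and then remove the group. Roughly, in client terms, assuming Address.fromParts with the host/port pairs shown in the log (those ports are just the ones this run happened to use):

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    class DrainAndRemoveGroup {
      static void drainAndRemove(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 32995));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 40971));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34377));
        // Logged as "Move servers done: bar => default".
        rsGroupAdmin.moveServers(servers, "default");
        // Now succeeds; the ZK GroupInfo count drops once 'bar' is gone.
        rsGroupAdmin.removeRSGroup("bar");
      }
    }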
2023-07-13 15:16:13,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:13,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:13,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,451 INFO [Listener at localhost/37749] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-13 15:16:13,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-13 15:16:13,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-13 15:16:13,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-13 15:16:13,456 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261373456"}]},"ts":"1689261373456"} 2023-07-13 15:16:13,457 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-13 15:16:13,460 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-13 15:16:13,461 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, UNASSIGN}] 2023-07-13 15:16:13,462 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, UNASSIGN 2023-07-13 15:16:13,463 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:13,463 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261373463"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261373463"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261373463"}]},"ts":"1689261373463"} 2023-07-13 15:16:13,465 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:13,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-13 15:16:13,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:13,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 13b3c09b6477682865b23aa8d30465e9, disabling compactions & flushes 2023-07-13 15:16:13,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:13,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:13,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. after waiting 0 ms 2023-07-13 15:16:13,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 2023-07-13 15:16:13,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 15:16:13,625 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9. 
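The DisableTableProcedure above (pid=90), together with its UNASSIGN and CloseRegionProcedure children, is driven by an ordinary Admin.disableTable() call on the client. A minimal sketch, assuming the standard HBase 2.x Admin API and using the table name under test:

// Sketch of the client call that drives DisableTableProcedure pid=90 above.
// Uses the standard HBase 2.x Admin API; nothing here is rsgroup-specific.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableGroupTable {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Blocks until the disable procedure (and its region UNASSIGNs) completes.
      admin.disableTable(tn);
      System.out.println("disabled: " + admin.isTableDisabled(tn));
    }
  }
}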
2023-07-13 15:16:13,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 13b3c09b6477682865b23aa8d30465e9: 2023-07-13 15:16:13,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:13,628 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=13b3c09b6477682865b23aa8d30465e9, regionState=CLOSED 2023-07-13 15:16:13,628 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261373628"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261373628"}]},"ts":"1689261373628"} 2023-07-13 15:16:13,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-13 15:16:13,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 13b3c09b6477682865b23aa8d30465e9, server=jenkins-hbase4.apache.org,44089,1689261357555 in 164 msec 2023-07-13 15:16:13,634 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-13 15:16:13,634 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=13b3c09b6477682865b23aa8d30465e9, UNASSIGN in 171 msec 2023-07-13 15:16:13,634 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261373634"}]},"ts":"1689261373634"} 2023-07-13 15:16:13,636 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-13 15:16:13,640 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-13 15:16:13,643 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 191 msec 2023-07-13 15:16:13,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-13 15:16:13,758 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-13 15:16:13,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-13 15:16:13,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 15:16:13,762 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 15:16:13,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-13 15:16:13,762 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 15:16:13,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:13,766 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:13,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-13 15:16:13,768 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/recovered.edits] 2023-07-13 15:16:13,774 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/recovered.edits/10.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9/recovered.edits/10.seqid 2023-07-13 15:16:13,775 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testFailRemoveGroup/13b3c09b6477682865b23aa8d30465e9 2023-07-13 15:16:13,775 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-13 15:16:13,777 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 15:16:13,780 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-13 15:16:13,782 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-13 15:16:13,783 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 15:16:13,783 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
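The DeleteTableProcedure above (pid=93) first has the HFileArchiver move the region directory into the archive tree and then deletes the table's rows from hbase:meta. A minimal sketch of the corresponding client call plus a look at the archive location, assuming the standard Admin and Hadoop FileSystem APIs; the hdfs:// URI and archive path are copied from the log for illustration only:

// Sketch: the deleteTable() call behind DeleteTableProcedure pid=93, followed by a
// listing of the archive directory the HFileArchiver wrote to. The NameNode URI and
// the archive path are taken from the log above and are illustrative only.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteAndCheckArchive {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Region data is archived, then the region and table state rows leave hbase:meta.
      admin.deleteTable(tn);
    }
    // Inspect what the HFileArchiver left behind (path copied from the log above).
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:37375"), conf);
    Path archived = new Path(
        "/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testFailRemoveGroup");
    for (FileStatus status : fs.listStatus(archived)) {
      System.out.println(status.getPath());
    }
  }
}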
2023-07-13 15:16:13,783 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261373783"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:13,784 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:13,785 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 13b3c09b6477682865b23aa8d30465e9, NAME => 'Group_testFailRemoveGroup,,1689261370528.13b3c09b6477682865b23aa8d30465e9.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:13,785 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-13 15:16:13,785 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261373785"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:13,787 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-13 15:16:13,789 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 15:16:13,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 30 msec 2023-07-13 15:16:13,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-13 15:16:13,869 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-13 15:16:13,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:13,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
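The ListRSGroupInfos and GetRSGroupInfo requests that follow are how the teardown confirms that only the default and master groups remain and that the cleanup has finished. A minimal sketch, again assuming the hbase-rsgroup 2.4 RSGroupAdminClient API:

// Sketch of the listRSGroups()/getRSGroupInfo() calls behind the ListRSGroupInfos and
// GetRSGroupInfo requests in the log. Assumes the hbase-rsgroup 2.4 RSGroupAdminClient API.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListGroups {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Matches the ListRSGroupInfos requests logged above.
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        System.out.println(group.getName()
            + " servers=" + group.getServers()
            + " tables=" + group.getTables());
      }
      // Single-group lookup, matching the GetRSGroupInfo request for group=default.
      RSGroupInfo def = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group has " + def.getServers().size() + " servers");
    }
  }
}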
2023-07-13 15:16:13,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:13,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:13,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:13,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:13,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:13,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:13,890 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:13,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:13,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:13,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:13,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:13,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:13,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262573902, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:13,903 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:13,904 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:13,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,906 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:13,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:13,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:13,925 INFO [Listener at localhost/37749] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=515 (was 501) Potentially hanging thread: hconnection-0x120ad869-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1514897013-172.31.14.131-1689261351889:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536-prefix:jenkins-hbase4.apache.org,44089,1689261357555.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1471309719_17 at /127.0.0.1:40540 [Receiving block BP-1514897013-172.31.14.131-1689261351889:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x497c82a-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1925825974_17 at /127.0.0.1:60560 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1573316261_17 at /127.0.0.1:40386 [Waiting for operation #13] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1471309719_17 at /127.0.0.1:40530 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1514897013-172.31.14.131-1689261351889:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1514897013-172.31.14.131-1689261351889:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1925825974_17 at /127.0.0.1:36552 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver 
for client DFSClient_NONMAPREDUCE_1471309719_17 at /127.0.0.1:48416 [Receiving block BP-1514897013-172.31.14.131-1689261351889:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1925825974_17 at /127.0.0.1:48430 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1471309719_17 at /127.0.0.1:40914 [Receiving block BP-1514897013-172.31.14.131-1689261351889:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=792 (was 789) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 534), ProcessCount=172 (was 172), AvailableMemoryMB=4543 (was 4840) 2023-07-13 15:16:13,926 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-13 15:16:13,943 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=515, OpenFileDescriptor=792, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=172, AvailableMemoryMB=4541 2023-07-13 15:16:13,943 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-13 15:16:13,943 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-13 15:16:13,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:13,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:13,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:13,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:13,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:13,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:13,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:13,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:13,959 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:13,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:13,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,962 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:13,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:13,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:13,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:13,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262573972, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:13,973 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:13,977 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:13,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,978 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:13,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:13,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:13,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:13,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:13,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1741005982 2023-07-13 15:16:13,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1741005982 2023-07-13 15:16:13,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:13,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:13,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,991 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995] to rsgroup Group_testMultiTableMove_1741005982 2023-07-13 15:16:13,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1741005982 2023-07-13 15:16:13,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:13,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 15:16:13,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367] are moved back to default 2023-07-13 15:16:13,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1741005982 2023-07-13 15:16:13,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:13,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:14,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1741005982 2023-07-13 15:16:14,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:14,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:14,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 15:16:14,006 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:14,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-13 15:16:14,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 15:16:14,008 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:14,009 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1741005982 2023-07-13 15:16:14,009 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:14,009 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:14,014 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:14,016 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,016 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e empty. 2023-07-13 15:16:14,017 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,017 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-13 15:16:14,031 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:14,033 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => dfce2b128d9d55ce4e699bb5ce2cf69e, NAME => 'GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:14,046 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:14,046 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
dfce2b128d9d55ce4e699bb5ce2cf69e, disabling compactions & flushes 2023-07-13 15:16:14,046 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:14,046 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:14,046 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. after waiting 0 ms 2023-07-13 15:16:14,046 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:14,046 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:14,046 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for dfce2b128d9d55ce4e699bb5ce2cf69e: 2023-07-13 15:16:14,049 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:14,051 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261374050"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261374050"}]},"ts":"1689261374050"} 2023-07-13 15:16:14,052 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
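The records above (15:16:13,980 through 15:16:13,997) show the test adding the rsgroup Group_testMultiTableMove_1741005982 and moving regionserver jenkins-hbase4.apache.org:32995 into it before any table is created. A minimal client-side sketch of those two admin calls, assuming the branch-2 hbase-rsgroup RSGroupAdminClient API and an already-open Connection to this mini-cluster, might look like the following; the class and variable names are illustrative, not taken from the test source.

```java
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupMoveServerSketch {
  // Adds a new rsgroup and moves one regionserver into it, mirroring the
  // AddRSGroup / MoveServers master service requests seen in the log above.
  static void addGroupAndMoveServer(Connection connection) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);

    // Corresponds to "add rsgroup Group_testMultiTableMove_1741005982".
    rsGroupAdmin.addRSGroup("Group_testMultiTableMove_1741005982");

    // Corresponds to "move servers [jenkins-hbase4.apache.org:32995] to rsgroup ...".
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 32995)),
        "Group_testMultiTableMove_1741005982");
  }
}
```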
2023-07-13 15:16:14,053 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:14,053 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261374053"}]},"ts":"1689261374053"} 2023-07-13 15:16:14,054 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-13 15:16:14,059 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:14,059 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:14,059 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:14,059 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:14,059 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:14,059 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, ASSIGN}] 2023-07-13 15:16:14,062 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, ASSIGN 2023-07-13 15:16:14,062 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:14,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 15:16:14,213 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 15:16:14,214 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=dfce2b128d9d55ce4e699bb5ce2cf69e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:14,214 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261374214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261374214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261374214"}]},"ts":"1689261374214"} 2023-07-13 15:16:14,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure dfce2b128d9d55ce4e699bb5ce2cf69e, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:14,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 15:16:14,372 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:14,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dfce2b128d9d55ce4e699bb5ce2cf69e, NAME => 'GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:14,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:14,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,374 INFO [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,375 DEBUG [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/f 2023-07-13 15:16:14,375 DEBUG [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/f 2023-07-13 15:16:14,376 INFO [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dfce2b128d9d55ce4e699bb5ce2cf69e columnFamilyName f 2023-07-13 15:16:14,377 INFO [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] regionserver.HStore(310): Store=dfce2b128d9d55ce4e699bb5ce2cf69e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:14,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:14,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:14,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dfce2b128d9d55ce4e699bb5ce2cf69e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9934940640, jitterRate=-0.07473655045032501}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:14,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dfce2b128d9d55ce4e699bb5ce2cf69e: 2023-07-13 15:16:14,385 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e., pid=96, masterSystemTime=1689261374368 2023-07-13 15:16:14,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:14,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 
2023-07-13 15:16:14,387 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=dfce2b128d9d55ce4e699bb5ce2cf69e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:14,387 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261374387"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261374387"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261374387"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261374387"}]},"ts":"1689261374387"} 2023-07-13 15:16:14,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-13 15:16:14,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure dfce2b128d9d55ce4e699bb5ce2cf69e, server=jenkins-hbase4.apache.org,40971,1689261357748 in 172 msec 2023-07-13 15:16:14,391 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-13 15:16:14,391 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, ASSIGN in 331 msec 2023-07-13 15:16:14,392 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:14,392 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261374392"}]},"ts":"1689261374392"} 2023-07-13 15:16:14,393 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-13 15:16:14,396 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:14,397 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 393 msec 2023-07-13 15:16:14,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 15:16:14,611 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-13 15:16:14,611 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-13 15:16:14,611 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:14,615 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
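The CreateTableProcedure trace above (pid=94) creates GrouptestMultiTableMoveA with a single column family 'f' and REGION_REPLICATION => '1', and the listener then waits until its one region is assigned. A hedged sketch of the equivalent client-side call with the HBase 2.x Admin API is shown below; the descriptor values mirror the attributes printed in the log, while the connection handling and class name are illustrative.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateGroupTableSketch {
  // Creates a table like the one described by the logged
  // "create 'GrouptestMultiTableMoveA', ..." request: one column family 'f'
  // with default settings and a single region replica.
  static void createTable(Connection connection) throws Exception {
    TableName tableName = TableName.valueOf("GrouptestMultiTableMoveA");
    TableDescriptor descriptor = TableDescriptorBuilder.newBuilder(tableName)
        .setRegionReplication(1) // TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    try (Admin admin = connection.getAdmin()) {
      // The synchronous createTable waits for the CreateTableProcedure to finish,
      // matching the "Operation: CREATE ... completed" record above.
      admin.createTable(descriptor);
    }
  }
}
```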
2023-07-13 15:16:14,616 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:14,616 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-13 15:16:14,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:14,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 15:16:14,621 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:14,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-13 15:16:14,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 15:16:14,627 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:14,628 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1741005982 2023-07-13 15:16:14,628 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:14,629 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:14,631 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:14,634 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:14,634 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 empty. 
2023-07-13 15:16:14,635 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:14,635 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-13 15:16:14,669 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:14,671 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 02b8480f287953298a03f5908a0e03d6, NAME => 'GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:14,689 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:14,689 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 02b8480f287953298a03f5908a0e03d6, disabling compactions & flushes 2023-07-13 15:16:14,689 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:14,689 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:14,689 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. after waiting 0 ms 2023-07-13 15:16:14,689 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:14,689 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 
2023-07-13 15:16:14,689 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 02b8480f287953298a03f5908a0e03d6: 2023-07-13 15:16:14,699 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:14,700 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261374700"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261374700"}]},"ts":"1689261374700"} 2023-07-13 15:16:14,702 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:14,705 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:14,705 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261374705"}]},"ts":"1689261374705"} 2023-07-13 15:16:14,706 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-13 15:16:14,712 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:14,712 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:14,712 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:14,712 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:14,712 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:14,712 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, ASSIGN}] 2023-07-13 15:16:14,715 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, ASSIGN 2023-07-13 15:16:14,716 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:14,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 15:16:14,866 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 15:16:14,868 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=02b8480f287953298a03f5908a0e03d6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:14,868 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261374868"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261374868"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261374868"}]},"ts":"1689261374868"} 2023-07-13 15:16:14,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 02b8480f287953298a03f5908a0e03d6, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:14,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 15:16:15,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:15,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 02b8480f287953298a03f5908a0e03d6, NAME => 'GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:15,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,031 INFO [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,033 DEBUG [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/f 2023-07-13 15:16:15,033 DEBUG [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/f 2023-07-13 15:16:15,033 INFO [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 02b8480f287953298a03f5908a0e03d6 columnFamilyName f 2023-07-13 15:16:15,034 INFO [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] regionserver.HStore(310): Store=02b8480f287953298a03f5908a0e03d6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:15,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:15,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 02b8480f287953298a03f5908a0e03d6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10111402240, jitterRate=-0.05830228328704834}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:15,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 02b8480f287953298a03f5908a0e03d6: 2023-07-13 15:16:15,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6., pid=99, masterSystemTime=1689261375023 2023-07-13 15:16:15,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:15,044 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=02b8480f287953298a03f5908a0e03d6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:15,044 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 
2023-07-13 15:16:15,044 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261375044"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261375044"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261375044"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261375044"}]},"ts":"1689261375044"} 2023-07-13 15:16:15,048 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-13 15:16:15,048 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 02b8480f287953298a03f5908a0e03d6, server=jenkins-hbase4.apache.org,44089,1689261357555 in 176 msec 2023-07-13 15:16:15,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-13 15:16:15,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, ASSIGN in 336 msec 2023-07-13 15:16:15,051 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:15,051 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261375051"}]},"ts":"1689261375051"} 2023-07-13 15:16:15,053 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-13 15:16:15,055 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:15,056 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 437 msec 2023-07-13 15:16:15,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 15:16:15,233 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-13 15:16:15,233 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-13 15:16:15,234 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:15,237 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 2023-07-13 15:16:15,238 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:15,238 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 
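Both tables are now created, and the listener waits for each table's single region to be assigned before proceeding (the "Waiting until all regions of table ... get assigned. Timeout = 60000ms" records). In a mini-cluster test that wait is typically expressed through HBaseTestingUtility; a small sketch under that assumption, with the utility instance passed in rather than taken from the suite, could be:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentSketch {
  // Blocks until every region of the table is assigned and reflected in
  // hbase:meta, matching the HBaseTestingUtility(3430)/(3504) records above.
  static void waitForTable(HBaseTestingUtility util) throws Exception {
    util.waitUntilAllRegionsAssigned(TableName.valueOf("GrouptestMultiTableMoveB"));
  }
}
```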
2023-07-13 15:16:15,238 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:15,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-13 15:16:15,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:15,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-13 15:16:15,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:15,252 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1741005982 2023-07-13 15:16:15,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1741005982 2023-07-13 15:16:15,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:15,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1741005982 2023-07-13 15:16:15,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:15,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:15,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1741005982 2023-07-13 15:16:15,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region 02b8480f287953298a03f5908a0e03d6 to RSGroup Group_testMultiTableMove_1741005982 2023-07-13 15:16:15,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, REOPEN/MOVE 2023-07-13 15:16:15,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1741005982 2023-07-13 15:16:15,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region dfce2b128d9d55ce4e699bb5ce2cf69e to RSGroup Group_testMultiTableMove_1741005982 2023-07-13 15:16:15,476 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, REOPEN/MOVE 2023-07-13 15:16:15,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, REOPEN/MOVE 2023-07-13 15:16:15,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1741005982, current retry=0 2023-07-13 15:16:15,479 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=02b8480f287953298a03f5908a0e03d6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:15,479 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261375479"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261375479"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261375479"}]},"ts":"1689261375479"} 2023-07-13 15:16:15,480 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, REOPEN/MOVE 2023-07-13 15:16:15,481 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 02b8480f287953298a03f5908a0e03d6, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:15,481 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=dfce2b128d9d55ce4e699bb5ce2cf69e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:15,482 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261375481"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261375481"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261375481"}]},"ts":"1689261375481"} 2023-07-13 15:16:15,484 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure dfce2b128d9d55ce4e699bb5ce2cf69e, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:15,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 02b8480f287953298a03f5908a0e03d6, disabling compactions & flushes 2023-07-13 15:16:15,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:15,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 
2023-07-13 15:16:15,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. after waiting 0 ms 2023-07-13 15:16:15,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:15,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:15,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dfce2b128d9d55ce4e699bb5ce2cf69e, disabling compactions & flushes 2023-07-13 15:16:15,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:15,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:15,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. after waiting 0 ms 2023-07-13 15:16:15,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:15,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:15,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:15,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:15,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 02b8480f287953298a03f5908a0e03d6: 2023-07-13 15:16:15,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 02b8480f287953298a03f5908a0e03d6 move to jenkins-hbase4.apache.org,32995,1689261357367 record at close sequenceid=2 2023-07-13 15:16:15,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 
2023-07-13 15:16:15,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dfce2b128d9d55ce4e699bb5ce2cf69e: 2023-07-13 15:16:15,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dfce2b128d9d55ce4e699bb5ce2cf69e move to jenkins-hbase4.apache.org,32995,1689261357367 record at close sequenceid=2 2023-07-13 15:16:15,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:15,648 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=02b8480f287953298a03f5908a0e03d6, regionState=CLOSED 2023-07-13 15:16:15,649 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261375648"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375648"}]},"ts":"1689261375648"} 2023-07-13 15:16:15,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:15,649 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=dfce2b128d9d55ce4e699bb5ce2cf69e, regionState=CLOSED 2023-07-13 15:16:15,649 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261375649"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375649"}]},"ts":"1689261375649"} 2023-07-13 15:16:15,655 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-13 15:16:15,655 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 02b8480f287953298a03f5908a0e03d6, server=jenkins-hbase4.apache.org,44089,1689261357555 in 169 msec 2023-07-13 15:16:15,657 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-13 15:16:15,657 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:15,657 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure dfce2b128d9d55ce4e699bb5ce2cf69e, server=jenkins-hbase4.apache.org,40971,1689261357748 in 167 msec 2023-07-13 15:16:15,657 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,32995,1689261357367; forceNewPlan=false, retain=false 2023-07-13 15:16:15,808 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=02b8480f287953298a03f5908a0e03d6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 
15:16:15,808 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=dfce2b128d9d55ce4e699bb5ce2cf69e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:15,808 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261375807"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261375807"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261375807"}]},"ts":"1689261375807"} 2023-07-13 15:16:15,808 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261375807"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261375807"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261375807"}]},"ts":"1689261375807"} 2023-07-13 15:16:15,810 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 02b8480f287953298a03f5908a0e03d6, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:15,811 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, state=RUNNABLE; OpenRegionProcedure dfce2b128d9d55ce4e699bb5ce2cf69e, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:15,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:15,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dfce2b128d9d55ce4e699bb5ce2cf69e, NAME => 'GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:15,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:15,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:15,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:15,989 INFO [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:15,991 DEBUG [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/f 2023-07-13 15:16:15,991 DEBUG [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/f 2023-07-13 15:16:15,991 INFO [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dfce2b128d9d55ce4e699bb5ce2cf69e columnFamilyName f 2023-07-13 15:16:15,993 INFO [StoreOpener-dfce2b128d9d55ce4e699bb5ce2cf69e-1] regionserver.HStore(310): Store=dfce2b128d9d55ce4e699bb5ce2cf69e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:15,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:16,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:16,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:16,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dfce2b128d9d55ce4e699bb5ce2cf69e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9531665920, jitterRate=-0.11229443550109863}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dfce2b128d9d55ce4e699bb5ce2cf69e: 2023-07-13 15:16:16,009 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e., pid=105, masterSystemTime=1689261375962 2023-07-13 15:16:16,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:16,011 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 
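[editor's note] The StoreOpener entries above describe a single column family f opened with stock settings (DefaultMemStore, encoding=NONE, compression=NONE). A minimal sketch of a table descriptor matching that layout, using the standard HBase 2.x builder API; the table and family names come from the log, everything else is left at defaults.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class DescriptorSketch {
      // One family "f" with defaults, matching the
      // "Store=.../f, memstore type=DefaultMemStore, ... encoding=NONE, compression=NONE" line.
      static TableDescriptor groupTestMultiTableMoveA() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
      }
    }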
2023-07-13 15:16:16,011 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:16,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 02b8480f287953298a03f5908a0e03d6, NAME => 'GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:16,011 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=dfce2b128d9d55ce4e699bb5ce2cf69e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:16,012 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261376011"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376011"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376011"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376011"}]},"ts":"1689261376011"} 2023-07-13 15:16:16,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:16,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:16,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:16,016 INFO [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:16,017 DEBUG [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/f 2023-07-13 15:16:16,017 DEBUG [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/f 2023-07-13 15:16:16,018 INFO [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 02b8480f287953298a03f5908a0e03d6 columnFamilyName f 2023-07-13 15:16:16,019 INFO [StoreOpener-02b8480f287953298a03f5908a0e03d6-1] regionserver.HStore(310): Store=02b8480f287953298a03f5908a0e03d6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-13 15:16:16,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure dfce2b128d9d55ce4e699bb5ce2cf69e, server=jenkins-hbase4.apache.org,32995,1689261357367 in 202 msec 2023-07-13 15:16:16,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:16,021 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, REOPEN/MOVE in 546 msec 2023-07-13 15:16:16,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:16,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:16,027 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 02b8480f287953298a03f5908a0e03d6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12069825120, jitterRate=0.12409006059169769}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 02b8480f287953298a03f5908a0e03d6: 2023-07-13 15:16:16,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6., pid=104, masterSystemTime=1689261375962 2023-07-13 15:16:16,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:16,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 
2023-07-13 15:16:16,031 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=02b8480f287953298a03f5908a0e03d6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:16,032 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261376031"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376031"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376031"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376031"}]},"ts":"1689261376031"} 2023-07-13 15:16:16,035 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-13 15:16:16,035 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 02b8480f287953298a03f5908a0e03d6, server=jenkins-hbase4.apache.org,32995,1689261357367 in 223 msec 2023-07-13 15:16:16,038 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, REOPEN/MOVE in 562 msec 2023-07-13 15:16:16,411 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:16,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-13 15:16:16,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1741005982. 
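[editor's note] Between the "move tables [...] to rsgroup" request and the "All regions from table(s) ... moved to target group" line above, the master rewrites the rsgroup znodes, schedules one REOPEN/MOVE TransitRegionStateProcedure per region (pid=100, pid=101) and blocks in ProcedureSyncWait until each region reopens on a server of the target group. A minimal client-side sketch of that call, assuming the coprocessor-based RSGroupAdminClient from the branch-2.4 hbase-rsgroup module; treat the exact constructor and method signatures as an assumption on other branches.

    import java.util.Arrays;
    import java.util.HashSet;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Issues RSGroupAdminService.MoveTables; the call only returns after every region
          // of both tables has been closed and reopened on a server of the target group.
          rsGroupAdmin.moveTables(
              new HashSet<>(Arrays.asList(
                  TableName.valueOf("GrouptestMultiTableMoveA"),
                  TableName.valueOf("GrouptestMultiTableMoveB"))),
              "Group_testMultiTableMove_1741005982");
        }
      }
    }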
2023-07-13 15:16:16,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:16,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:16,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:16,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-13 15:16:16,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:16,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-13 15:16:16,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:16,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:16,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:16,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1741005982 2023-07-13 15:16:16,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:16,512 INFO [Listener at localhost/37749] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-13 15:16:16,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-13 15:16:16,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 15:16:16,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-13 15:16:16,520 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261376520"}]},"ts":"1689261376520"} 2023-07-13 15:16:16,521 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-13 15:16:16,523 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-13 15:16:16,528 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, UNASSIGN}] 2023-07-13 15:16:16,529 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, UNASSIGN 2023-07-13 15:16:16,530 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=dfce2b128d9d55ce4e699bb5ce2cf69e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:16,531 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261376530"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376530"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376530"}]},"ts":"1689261376530"} 2023-07-13 15:16:16,533 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure dfce2b128d9d55ce4e699bb5ce2cf69e, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:16,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-13 15:16:16,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:16,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dfce2b128d9d55ce4e699bb5ce2cf69e, disabling compactions & flushes 2023-07-13 15:16:16,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:16,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:16,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. after waiting 0 ms 2023-07-13 15:16:16,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 
2023-07-13 15:16:16,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:16,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e. 2023-07-13 15:16:16,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dfce2b128d9d55ce4e699bb5ce2cf69e: 2023-07-13 15:16:16,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:16,696 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=dfce2b128d9d55ce4e699bb5ce2cf69e, regionState=CLOSED 2023-07-13 15:16:16,696 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261376696"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261376696"}]},"ts":"1689261376696"} 2023-07-13 15:16:16,699 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-13 15:16:16,699 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure dfce2b128d9d55ce4e699bb5ce2cf69e, server=jenkins-hbase4.apache.org,32995,1689261357367 in 164 msec 2023-07-13 15:16:16,700 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-13 15:16:16,700 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfce2b128d9d55ce4e699bb5ce2cf69e, UNASSIGN in 174 msec 2023-07-13 15:16:16,701 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261376701"}]},"ts":"1689261376701"} 2023-07-13 15:16:16,702 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-13 15:16:16,704 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-13 15:16:16,706 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 192 msec 2023-07-13 15:16:16,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-13 15:16:16,822 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-13 15:16:16,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-13 15:16:16,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-13 15:16:16,826 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 15:16:16,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1741005982' 2023-07-13 15:16:16,826 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 15:16:16,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:16,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1741005982 2023-07-13 15:16:16,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:16,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:16,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-13 15:16:16,834 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:16,835 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/recovered.edits] 2023-07-13 15:16:16,841 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/recovered.edits/7.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e/recovered.edits/7.seqid 2023-07-13 15:16:16,841 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveA/dfce2b128d9d55ce4e699bb5ce2cf69e 2023-07-13 15:16:16,841 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-13 15:16:16,846 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 15:16:16,848 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-13 15:16:16,850 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-13 15:16:16,852 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 15:16:16,852 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-13 15:16:16,852 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261376852"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:16,854 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:16,854 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => dfce2b128d9d55ce4e699bb5ce2cf69e, NAME => 'GrouptestMultiTableMoveA,,1689261374003.dfce2b128d9d55ce4e699bb5ce2cf69e.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:16,854 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-13 15:16:16,854 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261376854"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:16,855 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-13 15:16:16,857 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 15:16:16,859 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 34 msec 2023-07-13 15:16:16,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-13 15:16:16,934 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-13 15:16:16,934 INFO [Listener at localhost/37749] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-13 15:16:16,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-13 15:16:16,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 15:16:16,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 15:16:16,939 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261376939"}]},"ts":"1689261376939"} 2023-07-13 15:16:16,941 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-13 15:16:16,943 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-13 15:16:16,943 INFO 
[PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, UNASSIGN}] 2023-07-13 15:16:16,945 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, UNASSIGN 2023-07-13 15:16:16,946 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=02b8480f287953298a03f5908a0e03d6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:16,946 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261376946"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376946"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376946"}]},"ts":"1689261376946"} 2023-07-13 15:16:16,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 02b8480f287953298a03f5908a0e03d6, server=jenkins-hbase4.apache.org,32995,1689261357367}] 2023-07-13 15:16:17,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 15:16:17,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:17,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 02b8480f287953298a03f5908a0e03d6, disabling compactions & flushes 2023-07-13 15:16:17,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:17,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:17,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. after waiting 0 ms 2023-07-13 15:16:17,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 2023-07-13 15:16:17,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:17,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6. 
2023-07-13 15:16:17,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 02b8480f287953298a03f5908a0e03d6: 2023-07-13 15:16:17,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:17,108 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=02b8480f287953298a03f5908a0e03d6, regionState=CLOSED 2023-07-13 15:16:17,108 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689261377107"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377107"}]},"ts":"1689261377107"} 2023-07-13 15:16:17,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-13 15:16:17,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 02b8480f287953298a03f5908a0e03d6, server=jenkins-hbase4.apache.org,32995,1689261357367 in 162 msec 2023-07-13 15:16:17,112 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-13 15:16:17,112 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=02b8480f287953298a03f5908a0e03d6, UNASSIGN in 167 msec 2023-07-13 15:16:17,112 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261377112"}]},"ts":"1689261377112"} 2023-07-13 15:16:17,114 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-13 15:16:17,115 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-13 15:16:17,117 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 181 msec 2023-07-13 15:16:17,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 15:16:17,242 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-13 15:16:17,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-13 15:16:17,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 15:16:17,245 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 15:16:17,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1741005982' 2023-07-13 15:16:17,246 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 15:16:17,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1741005982 2023-07-13 15:16:17,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:17,250 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:17,252 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/recovered.edits] 2023-07-13 15:16:17,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-13 15:16:17,259 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/recovered.edits/7.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6/recovered.edits/7.seqid 2023-07-13 15:16:17,260 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/GrouptestMultiTableMoveB/02b8480f287953298a03f5908a0e03d6 2023-07-13 15:16:17,260 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-13 15:16:17,262 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 15:16:17,264 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-13 15:16:17,266 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-13 15:16:17,267 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 15:16:17,267 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-13 15:16:17,267 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261377267"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:17,268 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:17,268 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 02b8480f287953298a03f5908a0e03d6, NAME => 'GrouptestMultiTableMoveB,,1689261374618.02b8480f287953298a03f5908a0e03d6.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:17,268 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-13 15:16:17,269 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261377268"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:17,270 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-13 15:16:17,271 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 15:16:17,272 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 29 msec 2023-07-13 15:16:17,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-13 15:16:17,358 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-13 15:16:17,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:17,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:17,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:17,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995] to rsgroup default 2023-07-13 15:16:17,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1741005982 2023-07-13 15:16:17,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:17,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1741005982, current retry=0 2023-07-13 15:16:17,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367] are moved back to Group_testMultiTableMove_1741005982 2023-07-13 15:16:17,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1741005982 => default 2023-07-13 15:16:17,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1741005982 2023-07-13 15:16:17,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:17,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:17,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:17,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
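[editor's note] The teardown entries here restore the rsgroup layout between tests: the lone server left in Group_testMultiTableMove_1741005982 is moved back to default (RSGroupAdminService.MoveServers), the now-empty group is removed, and the helper "master" group is rebuilt in the entries that follow. A minimal sketch of the same cleanup, again assuming the branch-2.4 RSGroupAdminClient; the host and port values are copied from the log.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // "move servers [jenkins-hbase4.apache.org:32995] to rsgroup default" in the log.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 32995)),
              "default");
          // "remove rsgroup Group_testMultiTableMove_1741005982" in the log.
          rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_1741005982");
        }
      }
    }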
2023-07-13 15:16:17,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:17,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:17,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:17,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:17,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:17,404 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:17,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:17,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:17,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:17,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:17,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:17,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262577437, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:17,438 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:17,440 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:17,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,442 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:17,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:17,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,463 INFO [Listener at localhost/37749] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512 (was 515), OpenFileDescriptor=788 (was 792), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=172 (was 172), AvailableMemoryMB=4331 (was 4541) 2023-07-13 15:16:17,464 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-13 15:16:17,482 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=172, AvailableMemoryMB=4330 2023-07-13 15:16:17,482 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-13 15:16:17,484 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-13 15:16:17,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:17,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:17,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:17,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:17,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:17,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:17,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:17,500 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:17,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:17,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:17,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:17,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:17,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:17,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262577511, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:17,512 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:17,514 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:17,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,515 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:17,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:17,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:17,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-13 15:16:17,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 15:16:17,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:17,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:17,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377] to rsgroup oldGroup 2023-07-13 15:16:17,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 15:16:17,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:17,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 15:16:17,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353] are moved back to default 2023-07-13 15:16:17,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-13 15:16:17,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-13 15:16:17,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-13 15:16:17,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:17,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,543 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-13 15:16:17,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 15:16:17,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 15:16:17,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:17,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:17,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40971] to rsgroup anotherRSGroup 2023-07-13 15:16:17,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 15:16:17,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 15:16:17,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:17,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 15:16:17,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40971,1689261357748] are moved back to default 2023-07-13 15:16:17,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-13 15:16:17,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,565 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-13 15:16:17,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-13 15:16:17,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-13 15:16:17,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:17,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:50614 deadline: 1689262577575, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-13 15:16:17,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-13 15:16:17,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:17,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:50614 deadline: 1689262577578, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-13 15:16:17,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-13 15:16:17,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:17,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:50614 deadline: 1689262577579, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-13 15:16:17,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-13 15:16:17,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:17,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:50614 deadline: 1689262577581, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-13 15:16:17,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:17,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:17,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:17,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40971] to rsgroup default 2023-07-13 15:16:17,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 15:16:17,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 15:16:17,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:17,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-13 15:16:17,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40971,1689261357748] are moved back to anotherRSGroup 2023-07-13 15:16:17,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-13 15:16:17,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-13 15:16:17,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 15:16:17,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 15:16:17,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:17,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:17,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-13 15:16:17,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:17,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377] to rsgroup default 2023-07-13 15:16:17,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 15:16:17,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:17,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-13 15:16:17,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353] are moved back to oldGroup 2023-07-13 15:16:17,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-13 15:16:17,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-13 15:16:17,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:17,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:17,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:17,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:17,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:17,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:17,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:17,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:17,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:17,658 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:17,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:17,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:17,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:17,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:17,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:17,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262577677, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:17,688 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:17,690 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:17,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,692 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:17,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:17,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,729 INFO [Listener at localhost/37749] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=515 (was 512) Potentially hanging thread: hconnection-0x120ad869-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=788 (was 788), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=172 (was 172), AvailableMemoryMB=4329 (was 4330) 2023-07-13 15:16:17,730 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-13 15:16:17,752 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=515, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=172, AvailableMemoryMB=4328 2023-07-13 15:16:17,753 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-13 15:16:17,753 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-13 15:16:17,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:17,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
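[Editor's note, not part of the captured log] The ListRSGroupInfos / MoveTables / MoveServers / RemoveRSGroup / AddRSGroup requests logged above are the per-test cleanup that TestRSGroupsBase runs between methods. As a rough client-side sketch of the calls that would produce such entries: the connection setup, the RSGroupAdminClient constructor, and the exact method signatures below are assumptions based on the classes named in the stack traces, not something taken from this log.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move a test group's server back to the default group, mirroring the
      // "move servers [...] to rsgroup default" entries above (host/port are examples).
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40971)),
          "default");
      // Drop the now-empty group, mirroring the RemoveRSGroup requests.
      rsGroupAdmin.removeRSGroup("anotherRSGroup");
      // Re-create the bookkeeping "master" group that the test base class adds back.
      rsGroupAdmin.addRSGroup("master");
    }
  }
}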
2023-07-13 15:16:17,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:17,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:17,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:17,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:17,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:17,774 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:17,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:17,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:17,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:17,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:17,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:17,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262577788, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:17,789 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:17,790 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:17,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,792 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:17,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:17,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:17,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-13 15:16:17,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 15:16:17,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:17,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:17,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377] to rsgroup oldgroup 2023-07-13 15:16:17,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 15:16:17,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:17,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 15:16:17,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353] are moved back to default 2023-07-13 15:16:17,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-13 15:16:17,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:17,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:17,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:17,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-13 15:16:17,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:17,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:17,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-13 15:16:17,826 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:17,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-13 15:16:17,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 15:16:17,828 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 15:16:17,829 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,829 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,830 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:17,832 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:17,834 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:17,835 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 empty. 
2023-07-13 15:16:17,835 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:17,835 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-13 15:16:17,854 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:17,856 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => baa43973e5bda9f8cd7ce215ea0de4f7, NAME => 'testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:17,871 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:17,872 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing baa43973e5bda9f8cd7ce215ea0de4f7, disabling compactions & flushes 2023-07-13 15:16:17,872 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:17,872 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:17,872 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. after waiting 0 ms 2023-07-13 15:16:17,872 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:17,872 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:17,872 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for baa43973e5bda9f8cd7ce215ea0de4f7: 2023-07-13 15:16:17,875 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:17,879 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261377879"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377879"}]},"ts":"1689261377879"} 2023-07-13 15:16:17,880 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 15:16:17,881 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:17,881 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261377881"}]},"ts":"1689261377881"} 2023-07-13 15:16:17,882 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-13 15:16:17,886 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:17,886 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:17,886 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:17,886 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:17,887 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, ASSIGN}] 2023-07-13 15:16:17,889 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, ASSIGN 2023-07-13 15:16:17,890 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:17,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 15:16:18,040 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 15:16:18,042 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:18,042 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261378042"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261378042"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261378042"}]},"ts":"1689261378042"} 2023-07-13 15:16:18,044 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:18,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 15:16:18,201 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:18,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => baa43973e5bda9f8cd7ce215ea0de4f7, NAME => 'testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:18,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:18,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,203 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,204 DEBUG [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/tr 2023-07-13 15:16:18,204 DEBUG [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/tr 2023-07-13 15:16:18,205 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region baa43973e5bda9f8cd7ce215ea0de4f7 columnFamilyName tr 2023-07-13 15:16:18,206 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] regionserver.HStore(310): Store=baa43973e5bda9f8cd7ce215ea0de4f7/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:18,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:18,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened baa43973e5bda9f8cd7ce215ea0de4f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11623976800, jitterRate=0.08256720006465912}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:18,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for baa43973e5bda9f8cd7ce215ea0de4f7: 2023-07-13 15:16:18,214 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7., pid=116, masterSystemTime=1689261378197 2023-07-13 15:16:18,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:18,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 
2023-07-13 15:16:18,215 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:18,216 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261378215"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261378215"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261378215"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261378215"}]},"ts":"1689261378215"} 2023-07-13 15:16:18,218 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-13 15:16:18,219 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,40971,1689261357748 in 173 msec 2023-07-13 15:16:18,220 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-13 15:16:18,220 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, ASSIGN in 332 msec 2023-07-13 15:16:18,221 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:18,221 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261378221"}]},"ts":"1689261378221"} 2023-07-13 15:16:18,223 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-13 15:16:18,226 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:18,228 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 404 msec 2023-07-13 15:16:18,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 15:16:18,432 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-13 15:16:18,432 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-13 15:16:18,433 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:18,436 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-13 15:16:18,436 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:18,436 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
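[Editor's note, not part of the captured log] The CreateTableProcedure above (pid=114, table testRename with single family 'tr') and the "move tables [testRename] to rsgroup oldgroup" request that follows correspond, on the client side, to roughly the following Admin and RSGroupAdminClient calls. Table and family names come from the log; the connection setup and the RSGroupAdminClient usage are assumptions for illustration only.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameGroupTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("testRename");
      // Create the single-family table seen in the CreateTableProcedure log (family 'tr').
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build());
      // Move the new table into the 'oldgroup' RSGroup; this is what triggers the
      // REOPEN/MOVE TransitRegionStateProcedure logged below.
      new RSGroupAdminClient(conn).moveTables(Collections.singleton(table), "oldgroup");
    }
  }
}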
2023-07-13 15:16:18,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-13 15:16:18,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 15:16:18,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:18,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:18,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:18,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-13 15:16:18,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region baa43973e5bda9f8cd7ce215ea0de4f7 to RSGroup oldgroup 2023-07-13 15:16:18,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:18,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:18,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:18,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:18,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:18,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, REOPEN/MOVE 2023-07-13 15:16:18,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-13 15:16:18,446 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, REOPEN/MOVE 2023-07-13 15:16:18,446 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:18,447 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261378446"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261378446"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261378446"}]},"ts":"1689261378446"} 2023-07-13 15:16:18,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:18,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing baa43973e5bda9f8cd7ce215ea0de4f7, disabling compactions & flushes 2023-07-13 15:16:18,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:18,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:18,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. after waiting 0 ms 2023-07-13 15:16:18,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:18,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:18,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:18,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for baa43973e5bda9f8cd7ce215ea0de4f7: 2023-07-13 15:16:18,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding baa43973e5bda9f8cd7ce215ea0de4f7 move to jenkins-hbase4.apache.org,34377,1689261361353 record at close sequenceid=2 2023-07-13 15:16:18,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,611 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=CLOSED 2023-07-13 15:16:18,612 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261378611"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261378611"}]},"ts":"1689261378611"} 2023-07-13 15:16:18,614 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-13 15:16:18,614 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,40971,1689261357748 in 162 msec 2023-07-13 15:16:18,615 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34377,1689261361353; 
forceNewPlan=false, retain=false 2023-07-13 15:16:18,765 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:18,765 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:18,766 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261378765"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261378765"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261378765"}]},"ts":"1689261378765"} 2023-07-13 15:16:18,767 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:18,927 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:18,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => baa43973e5bda9f8cd7ce215ea0de4f7, NAME => 'testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:18,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:18,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,931 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,932 DEBUG [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/tr 2023-07-13 15:16:18,932 DEBUG [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/tr 2023-07-13 15:16:18,932 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region baa43973e5bda9f8cd7ce215ea0de4f7 columnFamilyName tr 2023-07-13 15:16:18,933 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] regionserver.HStore(310): Store=baa43973e5bda9f8cd7ce215ea0de4f7/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:18,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:18,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened baa43973e5bda9f8cd7ce215ea0de4f7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11409712800, jitterRate=0.06261231005191803}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:18,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for baa43973e5bda9f8cd7ce215ea0de4f7: 2023-07-13 15:16:18,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7., pid=119, masterSystemTime=1689261378919 2023-07-13 15:16:18,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:18,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 
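At this point the region has been reopened on jenkins-hbase4.apache.org,34377 and the endpoint is about to report that all regions of testRename have moved to the target group. A sketch of how a client could verify the result, using the read calls that appear in the following entries (GetRSGroupInfoOfTable, GetRSGroupInfo); assumed API is the branch-2.x RSGroupAdminClient:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class VerifyTableGroup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Which group does the table belong to now?
          RSGroupInfo byTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename is in group: " + byTable.getName());
          // Which servers and tables does that group hold?
          RSGroupInfo group = rsGroupAdmin.getRSGroupInfo("oldgroup");
          System.out.println("servers: " + group.getServers() + ", tables: " + group.getTables());
        }
      }
    }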
2023-07-13 15:16:18,941 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:18,941 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261378941"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261378941"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261378941"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261378941"}]},"ts":"1689261378941"} 2023-07-13 15:16:18,945 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-13 15:16:18,945 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,34377,1689261361353 in 176 msec 2023-07-13 15:16:18,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, REOPEN/MOVE in 500 msec 2023-07-13 15:16:19,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-13 15:16:19,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-13 15:16:19,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:19,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:19,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:19,452 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:19,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-13 15:16:19,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:19,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-13 15:16:19,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:19,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-13 15:16:19,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:19,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:19,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:19,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-13 15:16:19,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 15:16:19,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 15:16:19,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:19,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:19,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:19,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:19,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:19,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:19,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40971] to rsgroup normal 2023-07-13 15:16:19,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 15:16:19,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 15:16:19,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:19,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:19,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:19,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 15:16:19,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40971,1689261357748] are moved back to default 2023-07-13 15:16:19,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-13 15:16:19,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:19,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:19,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:19,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-13 15:16:19,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:19,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:19,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-13 15:16:19,487 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:19,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-13 15:16:19,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 15:16:19,489 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 15:16:19,489 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 15:16:19,490 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:19,490 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-13 15:16:19,490 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:19,492 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:19,494 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,495 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 empty. 2023-07-13 15:16:19,495 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,495 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-13 15:16:19,512 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:19,513 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0d8e2cb2b78a281359de79ba388b0059, NAME => 'unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:19,528 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:19,528 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 0d8e2cb2b78a281359de79ba388b0059, disabling compactions & flushes 2023-07-13 15:16:19,528 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:19,528 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:19,528 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. after waiting 0 ms 2023-07-13 15:16:19,528 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:19,528 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 
2023-07-13 15:16:19,528 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 0d8e2cb2b78a281359de79ba388b0059: 2023-07-13 15:16:19,530 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:19,531 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261379531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261379531"}]},"ts":"1689261379531"} 2023-07-13 15:16:19,537 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:19,538 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:19,538 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261379538"}]},"ts":"1689261379538"} 2023-07-13 15:16:19,540 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-13 15:16:19,543 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, ASSIGN}] 2023-07-13 15:16:19,544 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, ASSIGN 2023-07-13 15:16:19,545 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:19,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 15:16:19,657 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-13 15:16:19,697 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:19,697 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261379696"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261379696"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261379696"}]},"ts":"1689261379696"} 2023-07-13 15:16:19,698 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, 
server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:19,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 15:16:19,854 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:19,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d8e2cb2b78a281359de79ba388b0059, NAME => 'unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:19,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:19,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,859 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,860 DEBUG [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/ut 2023-07-13 15:16:19,860 DEBUG [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/ut 2023-07-13 15:16:19,860 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d8e2cb2b78a281359de79ba388b0059 columnFamilyName ut 2023-07-13 15:16:19,861 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] regionserver.HStore(310): Store=0d8e2cb2b78a281359de79ba388b0059/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:19,862 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:19,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:19,868 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0d8e2cb2b78a281359de79ba388b0059; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9500333120, jitterRate=-0.11521252989768982}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:19,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0d8e2cb2b78a281359de79ba388b0059: 2023-07-13 15:16:19,869 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059., pid=122, masterSystemTime=1689261379850 2023-07-13 15:16:19,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:19,870 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 
2023-07-13 15:16:19,870 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:19,871 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261379870"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261379870"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261379870"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261379870"}]},"ts":"1689261379870"} 2023-07-13 15:16:19,877 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-13 15:16:19,877 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,44089,1689261357555 in 177 msec 2023-07-13 15:16:19,878 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-13 15:16:19,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, ASSIGN in 334 msec 2023-07-13 15:16:19,879 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:19,879 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261379879"}]},"ts":"1689261379879"} 2023-07-13 15:16:19,880 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-13 15:16:19,887 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:19,888 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 403 msec 2023-07-13 15:16:20,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 15:16:20,091 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-13 15:16:20,092 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-13 15:16:20,092 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:20,095 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-13 15:16:20,095 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:20,095 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
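The "Waiting until all regions of table unmovedTable get assigned" lines above come from the test utility's waitUntilAllRegionsAssigned check after table creation. Outside the test harness, a plain client can confirm the same condition by listing region locations; a small sketch (table name taken from the log, everything else assumed):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class CheckAssignment {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("unmovedTable"))) {
          // Every region should report a server name once assignment has finished.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }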
2023-07-13 15:16:20,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-13 15:16:20,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 15:16:20,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 15:16:20,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:20,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:20,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:20,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-13 15:16:20,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region 0d8e2cb2b78a281359de79ba388b0059 to RSGroup normal 2023-07-13 15:16:20,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, REOPEN/MOVE 2023-07-13 15:16:20,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-13 15:16:20,103 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, REOPEN/MOVE 2023-07-13 15:16:20,104 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:20,104 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261380104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261380104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261380104"}]},"ts":"1689261380104"} 2023-07-13 15:16:20,105 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:20,257 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0d8e2cb2b78a281359de79ba388b0059, disabling compactions & flushes 2023-07-13 15:16:20,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 
2023-07-13 15:16:20,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:20,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. after waiting 0 ms 2023-07-13 15:16:20,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:20,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:20,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:20,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0d8e2cb2b78a281359de79ba388b0059: 2023-07-13 15:16:20,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0d8e2cb2b78a281359de79ba388b0059 move to jenkins-hbase4.apache.org,40971,1689261357748 record at close sequenceid=2 2023-07-13 15:16:20,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,265 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=CLOSED 2023-07-13 15:16:20,265 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261380265"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261380265"}]},"ts":"1689261380265"} 2023-07-13 15:16:20,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-13 15:16:20,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,44089,1689261357555 in 162 msec 2023-07-13 15:16:20,268 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:20,419 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:20,419 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261380419"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261380419"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261380419"}]},"ts":"1689261380419"} 2023-07-13 15:16:20,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:20,576 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:20,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d8e2cb2b78a281359de79ba388b0059, NAME => 'unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:20,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:20,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,578 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,579 DEBUG [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/ut 2023-07-13 15:16:20,579 DEBUG [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/ut 2023-07-13 15:16:20,579 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
0d8e2cb2b78a281359de79ba388b0059 columnFamilyName ut 2023-07-13 15:16:20,580 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] regionserver.HStore(310): Store=0d8e2cb2b78a281359de79ba388b0059/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:20,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:20,586 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0d8e2cb2b78a281359de79ba388b0059; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10345696640, jitterRate=-0.03648191690444946}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:20,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0d8e2cb2b78a281359de79ba388b0059: 2023-07-13 15:16:20,586 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059., pid=125, masterSystemTime=1689261380572 2023-07-13 15:16:20,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:20,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 
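The entries that follow show the last two steps of this test: the group rename (oldgroup to newgroup, via RSGroupAdminService.RenameRSGroup) and the move of unmovedTable back to the default group. A hedged sketch of those calls, assuming the renameRSGroup client method available in branch-2.4's RSGroupAdminClient (verify for your version):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameGroupAndMoveBack {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Rename the group; tables previously in 'oldgroup' (testRename) now report 'newgroup'.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
          RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename now in: " + renamed.getName());
          // Move unmovedTable out of 'normal' and back to the default group.
          rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")),
              RSGroupInfo.DEFAULT_GROUP);
        }
      }
    }

Moving the table back to default is why the entries below show another REOPEN/MOVE cycle (pids 126-127): the region is closed on 40971 and reassigned to a default-group server, 44089.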
2023-07-13 15:16:20,588 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:20,589 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261380588"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261380588"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261380588"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261380588"}]},"ts":"1689261380588"} 2023-07-13 15:16:20,591 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-13 15:16:20,592 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,40971,1689261357748 in 170 msec 2023-07-13 15:16:20,593 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, REOPEN/MOVE in 489 msec 2023-07-13 15:16:21,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-13 15:16:21,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-13 15:16:21,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:21,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:21,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:21,110 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:21,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 15:16:21,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:21,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-13 15:16:21,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:21,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 15:16:21,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:21,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-13 15:16:21,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 15:16:21,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:21,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:21,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 15:16:21,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-13 15:16:21,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-13 15:16:21,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:21,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:21,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-13 15:16:21,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:21,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-13 15:16:21,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:21,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 15:16:21,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:21,132 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:21,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:21,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-13 15:16:21,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 15:16:21,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:21,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:21,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 15:16:21,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:21,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-13 15:16:21,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region 0d8e2cb2b78a281359de79ba388b0059 to RSGroup default 2023-07-13 15:16:21,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, REOPEN/MOVE 2023-07-13 15:16:21,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 15:16:21,145 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, REOPEN/MOVE 2023-07-13 15:16:21,146 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:21,146 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261381146"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261381146"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261381146"}]},"ts":"1689261381146"} 2023-07-13 15:16:21,147 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:21,300 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0d8e2cb2b78a281359de79ba388b0059, disabling compactions & flushes 2023-07-13 15:16:21,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:21,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:21,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. after waiting 0 ms 2023-07-13 15:16:21,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:21,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:21,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:21,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0d8e2cb2b78a281359de79ba388b0059: 2023-07-13 15:16:21,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0d8e2cb2b78a281359de79ba388b0059 move to jenkins-hbase4.apache.org,44089,1689261357555 record at close sequenceid=5 2023-07-13 15:16:21,307 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,308 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=CLOSED 2023-07-13 15:16:21,308 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261381308"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261381308"}]},"ts":"1689261381308"} 2023-07-13 15:16:21,310 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-13 15:16:21,310 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,40971,1689261357748 in 162 msec 2023-07-13 15:16:21,311 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:21,461 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:21,462 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261381461"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261381461"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261381461"}]},"ts":"1689261381461"} 2023-07-13 15:16:21,463 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:21,478 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:21,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:21,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d8e2cb2b78a281359de79ba388b0059, NAME => 'unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:21,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:21,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,624 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,625 DEBUG [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/ut 2023-07-13 15:16:21,625 DEBUG [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/ut 2023-07-13 15:16:21,625 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d8e2cb2b78a281359de79ba388b0059 columnFamilyName ut 2023-07-13 15:16:21,626 INFO [StoreOpener-0d8e2cb2b78a281359de79ba388b0059-1] regionserver.HStore(310): Store=0d8e2cb2b78a281359de79ba388b0059/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:21,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:21,631 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0d8e2cb2b78a281359de79ba388b0059; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11556352640, jitterRate=0.07626920938491821}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:21,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0d8e2cb2b78a281359de79ba388b0059: 2023-07-13 15:16:21,632 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059., pid=128, masterSystemTime=1689261381615 2023-07-13 15:16:21,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:21,633 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 
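Note: the entries above trace one complete REOPEN/MOVE cycle for region 0d8e2cb2b78a281359de79ba388b0059 of unmovedTable: CloseRegionProcedure on the old regionserver, a recovered.edits seqid marker, reassignment, and OpenRegionProcedure on jenkins-hbase4.apache.org,44089. On the client side this whole cycle is driven by a single MoveTables request. The sketch below is illustrative only and is not taken from the test source; it assumes the RSGroupAdminClient constructor and moveTables signature referenced by the stack traces later in this log.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Coprocessor-based rsgroup client, as used by the stack traces in this log.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Issues the RSGroupAdminService.MoveTables RPC logged above; the master then
          // runs a TransitRegionStateProcedure (close -> assign -> open) for every region
          // of the table that is not already hosted by a server of the target group.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("unmovedTable")), "default");
          // The RPC returns only after RSGroupAdminServer has waited on those procedures
          // ("waitFor pid=..." followed by "All regions from table(s) [...] moved to target group ...").
        }
      }
    }
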
2023-07-13 15:16:21,634 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=0d8e2cb2b78a281359de79ba388b0059, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:21,634 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689261381634"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261381634"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261381634"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261381634"}]},"ts":"1689261381634"} 2023-07-13 15:16:21,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-13 15:16:21,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 0d8e2cb2b78a281359de79ba388b0059, server=jenkins-hbase4.apache.org,44089,1689261357555 in 172 msec 2023-07-13 15:16:21,637 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=0d8e2cb2b78a281359de79ba388b0059, REOPEN/MOVE in 492 msec 2023-07-13 15:16:22,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-13 15:16:22,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-13 15:16:22,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:22,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40971] to rsgroup default 2023-07-13 15:16:22,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 15:16:22,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:22,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:22,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 15:16:22,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:22,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-13 15:16:22,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40971,1689261357748] are moved back to normal 2023-07-13 15:16:22,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-13 15:16:22,153 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:22,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-13 15:16:22,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:22,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:22,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 15:16:22,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 15:16:22,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:22,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:22,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:22,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:22,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:22,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:22,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:22,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:22,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 15:16:22,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:22,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:22,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-13 15:16:22,172 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:22,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 15:16:22,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:22,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-13 15:16:22,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(345): Moving region baa43973e5bda9f8cd7ce215ea0de4f7 to RSGroup default 2023-07-13 15:16:22,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, REOPEN/MOVE 2023-07-13 15:16:22,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 15:16:22,176 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, REOPEN/MOVE 2023-07-13 15:16:22,177 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:22,177 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261382177"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261382177"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261382177"}]},"ts":"1689261382177"} 2023-07-13 15:16:22,178 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,34377,1689261361353}] 2023-07-13 15:16:22,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,332 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing baa43973e5bda9f8cd7ce215ea0de4f7, disabling compactions & flushes 2023-07-13 15:16:22,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:22,332 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:22,332 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 
after waiting 0 ms 2023-07-13 15:16:22,332 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:22,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:22,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:22,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for baa43973e5bda9f8cd7ce215ea0de4f7: 2023-07-13 15:16:22,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding baa43973e5bda9f8cd7ce215ea0de4f7 move to jenkins-hbase4.apache.org,40971,1689261357748 record at close sequenceid=5 2023-07-13 15:16:22,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,340 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=CLOSED 2023-07-13 15:16:22,340 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261382340"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261382340"}]},"ts":"1689261382340"} 2023-07-13 15:16:22,343 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-13 15:16:22,343 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,34377,1689261361353 in 163 msec 2023-07-13 15:16:22,343 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:22,493 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
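Note: the RenameRSGroup request near the start of this excerpt (15:16:21,114, "rename rsgroup from oldgroup to newgroup") and the GetRSGroupInfo / GetRSGroupInfoOfTable calls that follow it correspond, on the client side, to roughly the sketch below. This is a hedged illustration, not the testRenameRSGroup source: it assumes the client wrapper exposes renameRSGroup(oldName, newName) for the RenameRSGroup RPC shown in this log, alongside the getRSGroupInfo and getRSGroupInfoOfTable accessors.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameRSGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // RenameRSGroup RPC: the master rewrites the /hbase/rsgroup znodes (see the
          // "Updating znode" / "Writing ZK GroupInfo count" entries) so that the servers
          // and tables of "oldgroup" now belong to "newgroup".
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
          // Verification pattern matching the GetRSGroupInfo / GetRSGroupInfoOfTable
          // requests in the log: the renamed group still owns testRename, while tables
          // mapped to other groups (e.g. unmovedTable -> normal) are untouched.
          RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
          RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println(renamed.getName() + " owns testRename: "
              + renamed.getTables().contains(TableName.valueOf("testRename"))
              + "; table resolves to group " + ofTable.getName());
        }
      }
    }
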
2023-07-13 15:16:22,494 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:22,494 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261382494"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261382494"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261382494"}]},"ts":"1689261382494"} 2023-07-13 15:16:22,496 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:22,651 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:22,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => baa43973e5bda9f8cd7ce215ea0de4f7, NAME => 'testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:22,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:22,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,653 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,654 DEBUG [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/tr 2023-07-13 15:16:22,654 DEBUG [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/tr 2023-07-13 15:16:22,655 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region baa43973e5bda9f8cd7ce215ea0de4f7 columnFamilyName tr 2023-07-13 15:16:22,655 INFO [StoreOpener-baa43973e5bda9f8cd7ce215ea0de4f7-1] regionserver.HStore(310): Store=baa43973e5bda9f8cd7ce215ea0de4f7/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:22,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:22,661 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened baa43973e5bda9f8cd7ce215ea0de4f7; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10806926400, jitterRate=0.006473451852798462}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:22,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for baa43973e5bda9f8cd7ce215ea0de4f7: 2023-07-13 15:16:22,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7., pid=131, masterSystemTime=1689261382647 2023-07-13 15:16:22,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:22,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 
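Note: from here to the end of the excerpt the activity is TestRSGroupsBase housekeeping between test methods: every non-default group is emptied back into "default" and removed, the "master" group is re-created, and an attempt to move the master's address (jenkins-hbase4.apache.org:33053) into it is rejected with a ConstraintException, which the test logs as "Got this on setup, FYI" and ignores. The sketch below condenses that pattern; it is not the TestRSGroupsBase code and assumes the RSGroupAdminClient methods (listRSGroups, moveTables, moveServers, addRSGroup, removeRSGroup) and Address.fromParts visible in this log's RPC names and stack traces.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Fold every non-default group back into "default" and drop it, mirroring the
          // MoveTables / MoveServers / RemoveRSGroup sequence in the surrounding entries.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            if (!RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
              rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
              rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
              rsGroupAdmin.removeRSGroup(group.getName());
            }
          }
          // Re-create the "master" group and try to park the master's address in it.
          // The master is not a regionserver, so RSGroupAdminServer.moveServers rejects
          // the request ("Server ... is either offline or it does not exist."); the test
          // treats this as benign and only logs it.
          rsGroupAdmin.addRSGroup("master");
          try {
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33053)),
                "master");
          } catch (ConstraintException expected) {
            System.out.println("Got this on setup, FYI: " + expected.getMessage());
          }
        }
      }
    }
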
2023-07-13 15:16:22,664 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=baa43973e5bda9f8cd7ce215ea0de4f7, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:22,664 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689261382664"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261382664"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261382664"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261382664"}]},"ts":"1689261382664"} 2023-07-13 15:16:22,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-13 15:16:22,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure baa43973e5bda9f8cd7ce215ea0de4f7, server=jenkins-hbase4.apache.org,40971,1689261357748 in 170 msec 2023-07-13 15:16:22,668 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=baa43973e5bda9f8cd7ce215ea0de4f7, REOPEN/MOVE in 492 msec 2023-07-13 15:16:23,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-13 15:16:23,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-13 15:16:23,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:23,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377] to rsgroup default 2023-07-13 15:16:23,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 15:16:23,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:23,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-13 15:16:23,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353] are moved back to newgroup 2023-07-13 15:16:23,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-13 15:16:23,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:23,183 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-13 15:16:23,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:23,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:23,195 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:23,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:23,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:23,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:23,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:23,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262583212, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:23,213 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:23,215 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:23,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,216 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:23,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:23,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,240 INFO [Listener at localhost/37749] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=511 (was 515), OpenFileDescriptor=780 (was 788), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=459 (was 499), ProcessCount=172 (was 172), AvailableMemoryMB=4179 (was 4328) 2023-07-13 15:16:23,240 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-13 15:16:23,259 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=511, OpenFileDescriptor=780, MaxFileDescriptor=60000, SystemLoadAverage=459, ProcessCount=172, AvailableMemoryMB=4179 2023-07-13 15:16:23,260 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-13 15:16:23,260 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-13 15:16:23,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:23,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:23,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:23,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:23,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:23,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:23,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:23,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:23,276 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:23,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:23,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:23,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:23,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:23,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262583288, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:23,288 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:23,290 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:23,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,292 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:23,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:23,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-13 15:16:23,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:23,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-13 15:16:23,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-13 15:16:23,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-13 15:16:23,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-13 15:16:23,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:50614 deadline: 1689262583301, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-13 15:16:23,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-13 15:16:23,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:50614 deadline: 1689262583304, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-13 15:16:23,307 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-13 15:16:23,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-13 15:16:23,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-13 15:16:23,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:50614 deadline: 1689262583312, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-13 15:16:23,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:23,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
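The entries above correspond to TestRSGroupsAdmin1#testBogusArgs probing the rsgroup admin API with names that do not exist: lookups for an unknown group, table, and server, followed by removeRSGroup, moveServers, and balanceRSGroup against the group "bogus", each of which the master rejects with the ConstraintException recorded in the DEBUG lines. A minimal sketch of such a probe against the hbase-rsgroup client, assuming the RSGroupAdminClient, Address, and ConstraintException classes from that module (signatures here are an approximation, not the test's exact code):

    // Illustrative sketch only: roughly the calls the log above records.
    // Class and method names are assumptions based on the hbase-rsgroup module.
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class BogusArgsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);

          // Lookups for unknown entities are expected to return null.
          assert admin.getRSGroupInfo("bogus") == null;
          assert admin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")) == null;
          assert admin.getRSGroupOfServer(Address.fromParts("bogus", 123)) == null;

          // Mutations against a missing group are rejected with ConstraintException,
          // which is what the server-side DEBUG entries above record.
          try {
            admin.removeRSGroup("bogus");
          } catch (ConstraintException expected) {
            // "RSGroup bogus does not exist"
          }
          try {
            admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
          } catch (ConstraintException expected) {
            // "RSGroup does not exist: bogus"
          }
          try {
            admin.balanceRSGroup("bogus");
          } catch (ConstraintException expected) {
            // "RSGroup does not exist: bogus"
          }
        }
      }
    }

The log entries that follow are the usual post-test cleanup kicking in again (move tables/servers back to default, remove and re-add the "master" group).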
2023-07-13 15:16:23,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:23,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:23,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:23,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:23,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:23,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:23,326 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:23,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:23,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:23,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:23,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:23,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262583336, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:23,339 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:23,340 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:23,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,341 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:23,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:23,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,359 INFO [Listener at localhost/37749] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=515 (was 511) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9d6c10f-shared-pool-27 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=780 (was 780), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=459 (was 459), ProcessCount=172 (was 172), AvailableMemoryMB=4177 (was 4179) 2023-07-13 15:16:23,359 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-13 15:16:23,382 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=515, OpenFileDescriptor=780, MaxFileDescriptor=60000, SystemLoadAverage=459, ProcessCount=172, AvailableMemoryMB=4165 2023-07-13 15:16:23,382 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-13 15:16:23,404 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-13 15:16:23,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:23,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
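The sequence that begins here, and that has repeated before and after each test method above, is the TestRSGroupsBase reset between tests: every table and server is moved back to the default group, extra groups are dropped, a group for the master is re-created, and the attempt to move the master's address into it fails with the benign "is either offline or it does not exist" ConstraintException logged as "Got this on setup, FYI" (the master is not a region server). A rough sketch of that reset loop, assuming the same RSGroupAdminClient API as in the previous sketch and not the actual TestRSGroupsBase code:

    // Illustrative sketch only: the per-method group reset the log keeps repeating.
    // API names are assumptions based on the hbase-rsgroup client module.
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class GroupResetSketch {
      /** Move every table and server back to the default group and drop extra groups. */
      static void resetGroups(RSGroupAdminClient admin, Address masterAddress) throws Exception {
        for (RSGroupInfo group : admin.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            continue;
          }
          // Empty sets are tolerated; the server logs "passed an empty set. Ignoring."
          admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
          admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
          admin.removeRSGroup(group.getName());
        }
        // The tests keep the active master in its own "master" group; moving its
        // address fails because the master is not a region server, producing the
        // repeated benign WARN "Got this on setup, FYI".
        admin.addRSGroup("master");
        try {
          admin.moveServers(java.util.Collections.singleton(masterAddress), "master");
        } catch (ConstraintException benign) {
          // "Server ... is either offline or it does not exist."
        }
      }
    }

After this reset, testDisabledTableMove proceeds below by creating the group Group_testDisabledTableMove_1332351599, moving two region servers into it, and creating the Group_testDisabledTableMove table.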
2023-07-13 15:16:23,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:23,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:23,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:23,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:23,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:23,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:23,421 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:23,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:23,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:23,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:23,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:23,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262583432, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:23,433 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:23,435 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:23,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,436 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:23,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:23,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:23,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1332351599 2023-07-13 15:16:23,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1332351599 2023-07-13 
15:16:23,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:23,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:23,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377] to rsgroup Group_testDisabledTableMove_1332351599 2023-07-13 15:16:23,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1332351599 2023-07-13 15:16:23,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:23,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 15:16:23,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353] are moved back to default 2023-07-13 15:16:23,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1332351599 2023-07-13 15:16:23,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:23,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1332351599 2023-07-13 15:16:23,466 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:23,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-13 15:16:23,471 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:23,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-13 15:16:23,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-13 15:16:23,473 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,473 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1332351599 2023-07-13 15:16:23,473 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,474 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:23,475 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:23,479 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,479 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,479 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,479 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,479 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,480 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d empty. 2023-07-13 15:16:23,480 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb empty. 2023-07-13 15:16:23,480 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748 empty. 2023-07-13 15:16:23,480 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0 empty. 2023-07-13 15:16:23,480 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,480 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f empty. 2023-07-13 15:16:23,480 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,480 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,481 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,481 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,481 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-13 15:16:23,496 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:23,497 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3302f521260753d45664aa61e7d498eb, NAME => 'Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:23,497 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 1c42d5ddb0343f4760f1a2e0442fb748, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:23,498 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => f30dad9956b9de184a6ee13566d9183f, NAME => 'Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:23,519 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,519 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing f30dad9956b9de184a6ee13566d9183f, disabling compactions & flushes 2023-07-13 15:16:23,519 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 2023-07-13 15:16:23,519 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 2023-07-13 15:16:23,519 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 
after waiting 0 ms 2023-07-13 15:16:23,519 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,519 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 3302f521260753d45664aa61e7d498eb, disabling compactions & flushes 2023-07-13 15:16:23,519 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 2023-07-13 15:16:23,520 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 2023-07-13 15:16:23,520 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for f30dad9956b9de184a6ee13566d9183f: 2023-07-13 15:16:23,519 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 2023-07-13 15:16:23,520 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 2023-07-13 15:16:23,520 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. after waiting 0 ms 2023-07-13 15:16:23,520 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6e624d90aee9b9c61f45e33e58a1897d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:23,520 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 2023-07-13 15:16:23,520 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 
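The AddRSGroup / MoveServers / GetRSGroupInfo requests logged at 15:16:23,440–23,462 above are the operations the test drives through the hbase-rsgroup client. Below is a minimal sketch of that sequence, assuming the branch-2.4 RSGroupAdminClient API; the connection setup is illustrative and the host/port literals are placeholders taken loosely from the log, not from this run's actual test wiring.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    // Cluster settings are read from hbase-site.xml on the classpath.
    try (Connection conn = ConnectionFactory.createConnection()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Matches the "add rsgroup" request in the log (group name copied from the log).
      rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_1332351599");

      // Matches the "move servers" request; two servers were moved in the log,
      // the hostnames/ports here are illustrative placeholders.
      Set<Address> servers = new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase4.apache.org", 32995),
          Address.fromParts("jenkins-hbase4.apache.org", 34377)));
      rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_1332351599");

      // Matches the GetRSGroupInfo calls used to verify group membership.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_1332351599");
      System.out.println("servers in group: " + info.getServers());
    }
  }
}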
2023-07-13 15:16:23,520 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 3302f521260753d45664aa61e7d498eb: 2023-07-13 15:16:23,521 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3e7d1b1c555592d8a843f5e732993af0, NAME => 'Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp 2023-07-13 15:16:23,530 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,530 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 1c42d5ddb0343f4760f1a2e0442fb748, disabling compactions & flushes 2023-07-13 15:16:23,530 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:23,530 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:23,530 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. after waiting 0 ms 2023-07-13 15:16:23,530 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:23,531 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:23,531 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 1c42d5ddb0343f4760f1a2e0442fb748: 2023-07-13 15:16:23,544 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,544 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 6e624d90aee9b9c61f45e33e58a1897d, disabling compactions & flushes 2023-07-13 15:16:23,544 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 
2023-07-13 15:16:23,544 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:23,544 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. after waiting 0 ms 2023-07-13 15:16:23,544 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:23,544 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:23,544 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 6e624d90aee9b9c61f45e33e58a1897d: 2023-07-13 15:16:23,547 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,547 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 3e7d1b1c555592d8a843f5e732993af0, disabling compactions & flushes 2023-07-13 15:16:23,547 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 2023-07-13 15:16:23,547 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 2023-07-13 15:16:23,547 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. after waiting 0 ms 2023-07-13 15:16:23,547 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 2023-07-13 15:16:23,547 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 
2023-07-13 15:16:23,547 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 3e7d1b1c555592d8a843f5e732993af0: 2023-07-13 15:16:23,549 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:23,550 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261383550"}]},"ts":"1689261383550"} 2023-07-13 15:16:23,550 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261383550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261383550"}]},"ts":"1689261383550"} 2023-07-13 15:16:23,551 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261383550"}]},"ts":"1689261383550"} 2023-07-13 15:16:23,551 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261383550"}]},"ts":"1689261383550"} 2023-07-13 15:16:23,551 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261383550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261383550"}]},"ts":"1689261383550"} 2023-07-13 15:16:23,553 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
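The create logged above ends with five regions added to hbase:meta, bounded by the four split keys aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz. A minimal sketch of issuing an equivalent pre-split create through the public Admin API follows; the descriptor mirrors the single family 'f' from the log, while the binary split keys are simplified to printable placeholders rather than the exact byte sequences the test uses.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Single column family 'f' with defaults, as in the descriptor logged above.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();

      // Four split keys produce five regions, matching the log; the middle two
      // stand in for the binary keys i\xBF\x14i\xBE and r\x1C\xC7r\x1B.
      byte[][] splitKeys = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytes("i"),
          Bytes.toBytes("r"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(desc, splitKeys);
    }
  }
}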
2023-07-13 15:16:23,554 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:23,554 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261383554"}]},"ts":"1689261383554"} 2023-07-13 15:16:23,556 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-13 15:16:23,561 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:23,561 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:23,561 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:23,561 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:23,561 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3302f521260753d45664aa61e7d498eb, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f30dad9956b9de184a6ee13566d9183f, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1c42d5ddb0343f4760f1a2e0442fb748, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6e624d90aee9b9c61f45e33e58a1897d, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7d1b1c555592d8a843f5e732993af0, ASSIGN}] 2023-07-13 15:16:23,563 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1c42d5ddb0343f4760f1a2e0442fb748, ASSIGN 2023-07-13 15:16:23,563 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f30dad9956b9de184a6ee13566d9183f, ASSIGN 2023-07-13 15:16:23,564 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6e624d90aee9b9c61f45e33e58a1897d, ASSIGN 2023-07-13 15:16:23,564 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3302f521260753d45664aa61e7d498eb, ASSIGN 2023-07-13 15:16:23,564 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1c42d5ddb0343f4760f1a2e0442fb748, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:23,564 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7d1b1c555592d8a843f5e732993af0, ASSIGN 2023-07-13 15:16:23,564 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f30dad9956b9de184a6ee13566d9183f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:23,564 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6e624d90aee9b9c61f45e33e58a1897d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:23,564 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3302f521260753d45664aa61e7d498eb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44089,1689261357555; forceNewPlan=false, retain=false 2023-07-13 15:16:23,565 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7d1b1c555592d8a843f5e732993af0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40971,1689261357748; forceNewPlan=false, retain=false 2023-07-13 15:16:23,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-13 15:16:23,714 INFO [jenkins-hbase4:33053] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-13 15:16:23,718 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=3302f521260753d45664aa61e7d498eb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:23,718 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=3e7d1b1c555592d8a843f5e732993af0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:23,718 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261383718"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261383718"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261383718"}]},"ts":"1689261383718"} 2023-07-13 15:16:23,718 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=6e624d90aee9b9c61f45e33e58a1897d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:23,718 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f30dad9956b9de184a6ee13566d9183f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:23,718 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=1c42d5ddb0343f4760f1a2e0442fb748, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:23,719 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383718"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261383718"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261383718"}]},"ts":"1689261383718"} 2023-07-13 15:16:23,718 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383718"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261383718"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261383718"}]},"ts":"1689261383718"} 2023-07-13 15:16:23,718 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261383718"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261383718"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261383718"}]},"ts":"1689261383718"} 2023-07-13 15:16:23,719 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383718"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261383718"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261383718"}]},"ts":"1689261383718"} 2023-07-13 15:16:23,720 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=133, state=RUNNABLE; OpenRegionProcedure 3302f521260753d45664aa61e7d498eb, 
server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:23,721 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=134, state=RUNNABLE; OpenRegionProcedure f30dad9956b9de184a6ee13566d9183f, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:23,723 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=136, state=RUNNABLE; OpenRegionProcedure 6e624d90aee9b9c61f45e33e58a1897d, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:23,723 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=137, state=RUNNABLE; OpenRegionProcedure 3e7d1b1c555592d8a843f5e732993af0, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:23,724 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=135, state=RUNNABLE; OpenRegionProcedure 1c42d5ddb0343f4760f1a2e0442fb748, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:23,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-13 15:16:23,876 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:23,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6e624d90aee9b9c61f45e33e58a1897d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 15:16:23,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,877 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 
2023-07-13 15:16:23,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1c42d5ddb0343f4760f1a2e0442fb748, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 15:16:23,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,878 INFO [StoreOpener-6e624d90aee9b9c61f45e33e58a1897d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,879 INFO [StoreOpener-1c42d5ddb0343f4760f1a2e0442fb748-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,880 DEBUG [StoreOpener-6e624d90aee9b9c61f45e33e58a1897d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d/f 2023-07-13 15:16:23,880 DEBUG [StoreOpener-6e624d90aee9b9c61f45e33e58a1897d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d/f 2023-07-13 15:16:23,880 DEBUG [StoreOpener-1c42d5ddb0343f4760f1a2e0442fb748-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748/f 2023-07-13 15:16:23,880 INFO [StoreOpener-6e624d90aee9b9c61f45e33e58a1897d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6e624d90aee9b9c61f45e33e58a1897d columnFamilyName f 2023-07-13 15:16:23,880 DEBUG [StoreOpener-1c42d5ddb0343f4760f1a2e0442fb748-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748/f 2023-07-13 15:16:23,881 INFO [StoreOpener-1c42d5ddb0343f4760f1a2e0442fb748-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1c42d5ddb0343f4760f1a2e0442fb748 columnFamilyName f 2023-07-13 15:16:23,881 INFO [StoreOpener-6e624d90aee9b9c61f45e33e58a1897d-1] regionserver.HStore(310): Store=6e624d90aee9b9c61f45e33e58a1897d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:23,882 INFO [StoreOpener-1c42d5ddb0343f4760f1a2e0442fb748-1] regionserver.HStore(310): Store=1c42d5ddb0343f4760f1a2e0442fb748/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:23,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:23,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:23,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:23,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:23,888 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6e624d90aee9b9c61f45e33e58a1897d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12069597920, jitterRate=0.12406890094280243}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:23,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6e624d90aee9b9c61f45e33e58a1897d: 2023-07-13 15:16:23,889 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1c42d5ddb0343f4760f1a2e0442fb748; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11386630400, jitterRate=0.0604625940322876}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:23,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1c42d5ddb0343f4760f1a2e0442fb748: 2023-07-13 15:16:23,895 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d., pid=140, masterSystemTime=1689261383872 2023-07-13 15:16:23,895 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748., pid=142, masterSystemTime=1689261383873 2023-07-13 15:16:23,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:23,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:23,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 
2023-07-13 15:16:23,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3e7d1b1c555592d8a843f5e732993af0, NAME => 'Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 15:16:23,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,897 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=1c42d5ddb0343f4760f1a2e0442fb748, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:23,898 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383897"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261383897"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261383897"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261383897"}]},"ts":"1689261383897"} 2023-07-13 15:16:23,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:23,898 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:23,898 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 
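Once the OpenRegionProcedures above report OPEN and hbase:meta carries the new regionLocation values, the same assignment is observable from a client through RegionLocator. A minimal sketch, assuming the table name from this test and default connection configuration:

import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ListRegionLocationsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         RegionLocator locator = conn.getRegionLocator(
             TableName.valueOf("Group_testDisabledTableMove"))) {
      // Each HRegionLocation pairs a region (with its start/end key) with the
      // server hosting it, mirroring the OPEN + regionLocation updates above.
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        System.out.println(loc.getRegion().getRegionNameAsString()
            + " -> " + loc.getServerName());
      }
    }
  }
}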
2023-07-13 15:16:23,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3302f521260753d45664aa61e7d498eb, NAME => 'Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 15:16:23,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,899 INFO [StoreOpener-3e7d1b1c555592d8a843f5e732993af0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,899 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=6e624d90aee9b9c61f45e33e58a1897d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:23,899 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383899"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261383899"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261383899"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261383899"}]},"ts":"1689261383899"} 2023-07-13 15:16:23,901 INFO [StoreOpener-3302f521260753d45664aa61e7d498eb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=135 2023-07-13 15:16:23,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=135, state=SUCCESS; OpenRegionProcedure 1c42d5ddb0343f4760f1a2e0442fb748, server=jenkins-hbase4.apache.org,40971,1689261357748 in 175 msec 2023-07-13 15:16:23,903 DEBUG [StoreOpener-3e7d1b1c555592d8a843f5e732993af0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0/f 2023-07-13 15:16:23,903 DEBUG [StoreOpener-3e7d1b1c555592d8a843f5e732993af0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0/f 2023-07-13 15:16:23,903 DEBUG [StoreOpener-3302f521260753d45664aa61e7d498eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb/f 2023-07-13 15:16:23,903 DEBUG [StoreOpener-3302f521260753d45664aa61e7d498eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb/f 2023-07-13 15:16:23,903 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1c42d5ddb0343f4760f1a2e0442fb748, ASSIGN in 341 msec 2023-07-13 15:16:23,903 INFO [StoreOpener-3e7d1b1c555592d8a843f5e732993af0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3e7d1b1c555592d8a843f5e732993af0 columnFamilyName f 2023-07-13 15:16:23,903 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=136 2023-07-13 15:16:23,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=136, state=SUCCESS; OpenRegionProcedure 6e624d90aee9b9c61f45e33e58a1897d, server=jenkins-hbase4.apache.org,44089,1689261357555 in 178 msec 2023-07-13 15:16:23,904 INFO [StoreOpener-3302f521260753d45664aa61e7d498eb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3302f521260753d45664aa61e7d498eb columnFamilyName f 2023-07-13 15:16:23,904 INFO [StoreOpener-3e7d1b1c555592d8a843f5e732993af0-1] regionserver.HStore(310): Store=3e7d1b1c555592d8a843f5e732993af0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:23,904 INFO [StoreOpener-3302f521260753d45664aa61e7d498eb-1] regionserver.HStore(310): Store=3302f521260753d45664aa61e7d498eb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:23,905 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,907 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6e624d90aee9b9c61f45e33e58a1897d, ASSIGN in 343 msec 2023-07-13 15:16:23,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:23,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:23,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:23,912 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3e7d1b1c555592d8a843f5e732993af0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11599485760, jitterRate=0.08028629422187805}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:23,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3e7d1b1c555592d8a843f5e732993af0: 2023-07-13 15:16:23,912 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0., pid=141, masterSystemTime=1689261383873 2023-07-13 15:16:23,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:23,914 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3302f521260753d45664aa61e7d498eb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11963822080, 
jitterRate=0.11421775817871094}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:23,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3302f521260753d45664aa61e7d498eb: 2023-07-13 15:16:23,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb., pid=138, masterSystemTime=1689261383872 2023-07-13 15:16:23,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 2023-07-13 15:16:23,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 2023-07-13 15:16:23,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 2023-07-13 15:16:23,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f30dad9956b9de184a6ee13566d9183f, NAME => 'Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 15:16:23,915 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=3e7d1b1c555592d8a843f5e732993af0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:23,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,915 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261383915"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261383915"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261383915"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261383915"}]},"ts":"1689261383915"} 2023-07-13 15:16:23,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 2023-07-13 15:16:23,916 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 
2023-07-13 15:16:23,917 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-13 15:16:23,917 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-13 15:16:23,919 INFO [StoreOpener-f30dad9956b9de184a6ee13566d9183f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,919 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=3302f521260753d45664aa61e7d498eb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:23,919 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261383919"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261383919"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261383919"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261383919"}]},"ts":"1689261383919"} 2023-07-13 15:16:23,921 DEBUG [StoreOpener-f30dad9956b9de184a6ee13566d9183f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f/f 2023-07-13 15:16:23,921 DEBUG [StoreOpener-f30dad9956b9de184a6ee13566d9183f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f/f 2023-07-13 15:16:23,921 INFO [StoreOpener-f30dad9956b9de184a6ee13566d9183f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f30dad9956b9de184a6ee13566d9183f columnFamilyName f 2023-07-13 15:16:23,922 INFO [StoreOpener-f30dad9956b9de184a6ee13566d9183f-1] regionserver.HStore(310): Store=f30dad9956b9de184a6ee13566d9183f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:23,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,924 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=137 2023-07-13 
15:16:23,924 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; OpenRegionProcedure 3e7d1b1c555592d8a843f5e732993af0, server=jenkins-hbase4.apache.org,40971,1689261357748 in 197 msec 2023-07-13 15:16:23,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=133 2023-07-13 15:16:23,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=133, state=SUCCESS; OpenRegionProcedure 3302f521260753d45664aa61e7d498eb, server=jenkins-hbase4.apache.org,44089,1689261357555 in 202 msec 2023-07-13 15:16:23,925 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7d1b1c555592d8a843f5e732993af0, ASSIGN in 363 msec 2023-07-13 15:16:23,926 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3302f521260753d45664aa61e7d498eb, ASSIGN in 364 msec 2023-07-13 15:16:23,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:23,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:23,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f30dad9956b9de184a6ee13566d9183f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10109430240, jitterRate=-0.05848594009876251}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:23,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f30dad9956b9de184a6ee13566d9183f: 2023-07-13 15:16:23,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f., pid=139, masterSystemTime=1689261383873 2023-07-13 15:16:23,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 2023-07-13 15:16:23,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 
2023-07-13 15:16:23,933 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f30dad9956b9de184a6ee13566d9183f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:23,933 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261383933"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261383933"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261383933"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261383933"}]},"ts":"1689261383933"} 2023-07-13 15:16:23,935 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-13 15:16:23,935 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; OpenRegionProcedure f30dad9956b9de184a6ee13566d9183f, server=jenkins-hbase4.apache.org,40971,1689261357748 in 213 msec 2023-07-13 15:16:23,937 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-13 15:16:23,937 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f30dad9956b9de184a6ee13566d9183f, ASSIGN in 374 msec 2023-07-13 15:16:23,938 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:23,938 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261383938"}]},"ts":"1689261383938"} 2023-07-13 15:16:23,939 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-13 15:16:23,943 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:23,944 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 475 msec 2023-07-13 15:16:24,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-13 15:16:24,075 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-13 15:16:24,075 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-13 15:16:24,075 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:24,079 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-13 15:16:24,079 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:24,079 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-13 15:16:24,080 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:24,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-13 15:16:24,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:24,086 INFO [Listener at localhost/37749] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-13 15:16:24,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-13 15:16:24,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-13 15:16:24,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-13 15:16:24,090 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261384090"}]},"ts":"1689261384090"} 2023-07-13 15:16:24,091 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-13 15:16:24,093 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-13 15:16:24,094 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3302f521260753d45664aa61e7d498eb, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f30dad9956b9de184a6ee13566d9183f, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1c42d5ddb0343f4760f1a2e0442fb748, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6e624d90aee9b9c61f45e33e58a1897d, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7d1b1c555592d8a843f5e732993af0, UNASSIGN}] 2023-07-13 15:16:24,095 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f30dad9956b9de184a6ee13566d9183f, UNASSIGN 2023-07-13 15:16:24,095 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1c42d5ddb0343f4760f1a2e0442fb748, UNASSIGN 2023-07-13 15:16:24,095 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3302f521260753d45664aa61e7d498eb, UNASSIGN 2023-07-13 15:16:24,096 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6e624d90aee9b9c61f45e33e58a1897d, UNASSIGN 2023-07-13 15:16:24,096 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7d1b1c555592d8a843f5e732993af0, UNASSIGN 2023-07-13 15:16:24,096 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f30dad9956b9de184a6ee13566d9183f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:24,096 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=1c42d5ddb0343f4760f1a2e0442fb748, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:24,096 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261384096"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261384096"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261384096"}]},"ts":"1689261384096"} 2023-07-13 15:16:24,096 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=3302f521260753d45664aa61e7d498eb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:24,096 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261384096"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261384096"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261384096"}]},"ts":"1689261384096"} 2023-07-13 15:16:24,097 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261384096"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261384096"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261384096"}]},"ts":"1689261384096"} 2023-07-13 15:16:24,097 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=6e624d90aee9b9c61f45e33e58a1897d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:24,097 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=3e7d1b1c555592d8a843f5e732993af0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:24,097 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261384097"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261384097"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261384097"}]},"ts":"1689261384097"} 2023-07-13 15:16:24,097 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261384097"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261384097"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261384097"}]},"ts":"1689261384097"} 2023-07-13 15:16:24,098 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=145, state=RUNNABLE; CloseRegionProcedure f30dad9956b9de184a6ee13566d9183f, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:24,098 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=146, state=RUNNABLE; CloseRegionProcedure 1c42d5ddb0343f4760f1a2e0442fb748, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:24,099 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=144, state=RUNNABLE; CloseRegionProcedure 3302f521260753d45664aa61e7d498eb, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:24,100 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure 6e624d90aee9b9c61f45e33e58a1897d, server=jenkins-hbase4.apache.org,44089,1689261357555}] 2023-07-13 15:16:24,100 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure 3e7d1b1c555592d8a843f5e732993af0, server=jenkins-hbase4.apache.org,40971,1689261357748}] 2023-07-13 15:16:24,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-13 15:16:24,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:24,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3e7d1b1c555592d8a843f5e732993af0, disabling compactions & flushes 2023-07-13 15:16:24,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 2023-07-13 15:16:24,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 2023-07-13 15:16:24,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. after waiting 0 ms 2023-07-13 15:16:24,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 
2023-07-13 15:16:24,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:24,252 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3302f521260753d45664aa61e7d498eb, disabling compactions & flushes 2023-07-13 15:16:24,252 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 2023-07-13 15:16:24,252 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 2023-07-13 15:16:24,252 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. after waiting 0 ms 2023-07-13 15:16:24,252 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 2023-07-13 15:16:24,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:24,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:24,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb. 2023-07-13 15:16:24,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3302f521260753d45664aa61e7d498eb: 2023-07-13 15:16:24,257 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0. 2023-07-13 15:16:24,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3e7d1b1c555592d8a843f5e732993af0: 2023-07-13 15:16:24,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:24,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:24,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6e624d90aee9b9c61f45e33e58a1897d, disabling compactions & flushes 2023-07-13 15:16:24,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:24,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 
2023-07-13 15:16:24,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. after waiting 0 ms 2023-07-13 15:16:24,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:24,259 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=3302f521260753d45664aa61e7d498eb, regionState=CLOSED 2023-07-13 15:16:24,259 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261384259"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261384259"}]},"ts":"1689261384259"} 2023-07-13 15:16:24,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:24,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:24,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f30dad9956b9de184a6ee13566d9183f, disabling compactions & flushes 2023-07-13 15:16:24,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 2023-07-13 15:16:24,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 2023-07-13 15:16:24,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. after waiting 0 ms 2023-07-13 15:16:24,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 
2023-07-13 15:16:24,262 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=3e7d1b1c555592d8a843f5e732993af0, regionState=CLOSED 2023-07-13 15:16:24,262 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261384262"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261384262"}]},"ts":"1689261384262"} 2023-07-13 15:16:24,264 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=144 2023-07-13 15:16:24,264 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=144, state=SUCCESS; CloseRegionProcedure 3302f521260753d45664aa61e7d498eb, server=jenkins-hbase4.apache.org,44089,1689261357555 in 163 msec 2023-07-13 15:16:24,265 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-13 15:16:24,265 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3302f521260753d45664aa61e7d498eb, UNASSIGN in 170 msec 2023-07-13 15:16:24,265 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure 3e7d1b1c555592d8a843f5e732993af0, server=jenkins-hbase4.apache.org,40971,1689261357748 in 163 msec 2023-07-13 15:16:24,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:24,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:24,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d. 2023-07-13 15:16:24,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6e624d90aee9b9c61f45e33e58a1897d: 2023-07-13 15:16:24,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f. 
2023-07-13 15:16:24,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f30dad9956b9de184a6ee13566d9183f: 2023-07-13 15:16:24,267 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7d1b1c555592d8a843f5e732993af0, UNASSIGN in 171 msec 2023-07-13 15:16:24,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:24,268 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=6e624d90aee9b9c61f45e33e58a1897d, regionState=CLOSED 2023-07-13 15:16:24,268 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261384268"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261384268"}]},"ts":"1689261384268"} 2023-07-13 15:16:24,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:24,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:24,269 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f30dad9956b9de184a6ee13566d9183f, regionState=CLOSED 2023-07-13 15:16:24,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1c42d5ddb0343f4760f1a2e0442fb748, disabling compactions & flushes 2023-07-13 15:16:24,270 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:24,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:24,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. after waiting 0 ms 2023-07-13 15:16:24,270 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261384269"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261384269"}]},"ts":"1689261384269"} 2023-07-13 15:16:24,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 
2023-07-13 15:16:24,272 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=147 2023-07-13 15:16:24,273 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=147, state=SUCCESS; CloseRegionProcedure 6e624d90aee9b9c61f45e33e58a1897d, server=jenkins-hbase4.apache.org,44089,1689261357555 in 170 msec 2023-07-13 15:16:24,273 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=145 2023-07-13 15:16:24,273 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=145, state=SUCCESS; CloseRegionProcedure f30dad9956b9de184a6ee13566d9183f, server=jenkins-hbase4.apache.org,40971,1689261357748 in 174 msec 2023-07-13 15:16:24,274 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6e624d90aee9b9c61f45e33e58a1897d, UNASSIGN in 179 msec 2023-07-13 15:16:24,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:24,275 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f30dad9956b9de184a6ee13566d9183f, UNASSIGN in 179 msec 2023-07-13 15:16:24,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748. 2023-07-13 15:16:24,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1c42d5ddb0343f4760f1a2e0442fb748: 2023-07-13 15:16:24,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:24,277 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=1c42d5ddb0343f4760f1a2e0442fb748, regionState=CLOSED 2023-07-13 15:16:24,277 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261384277"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261384277"}]},"ts":"1689261384277"} 2023-07-13 15:16:24,279 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=146 2023-07-13 15:16:24,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; CloseRegionProcedure 1c42d5ddb0343f4760f1a2e0442fb748, server=jenkins-hbase4.apache.org,40971,1689261357748 in 180 msec 2023-07-13 15:16:24,281 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=143 2023-07-13 15:16:24,281 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1c42d5ddb0343f4760f1a2e0442fb748, UNASSIGN in 185 msec 2023-07-13 15:16:24,281 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261384281"}]},"ts":"1689261384281"} 2023-07-13 15:16:24,282 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-13 15:16:24,285 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-13 15:16:24,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 198 msec 2023-07-13 15:16:24,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-13 15:16:24,392 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-13 15:16:24,392 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1332351599 2023-07-13 15:16:24,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1332351599 2023-07-13 15:16:24,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:24,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1332351599 2023-07-13 15:16:24,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:24,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:24,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-13 15:16:24,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1332351599, current retry=0 2023-07-13 15:16:24,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1332351599. 
2023-07-13 15:16:24,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:24,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:24,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:24,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-13 15:16:24,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:24,405 INFO [Listener at localhost/37749] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-13 15:16:24,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-13 15:16:24,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:24,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 87 connection: 172.31.14.131:50614 deadline: 1689261444405, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-13 15:16:24,406 DEBUG [Listener at localhost/37749] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-13 15:16:24,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-13 15:16:24,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 15:16:24,409 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 15:16:24,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1332351599' 2023-07-13 15:16:24,409 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 15:16:24,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:24,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1332351599 2023-07-13 15:16:24,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:24,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:24,416 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:24,416 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:24,416 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:24,416 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:24,416 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:24,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-13 15:16:24,419 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d/f, FileablePath, 
hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d/recovered.edits] 2023-07-13 15:16:24,419 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb/recovered.edits] 2023-07-13 15:16:24,419 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748/recovered.edits] 2023-07-13 15:16:24,419 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f/recovered.edits] 2023-07-13 15:16:24,419 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0/f, FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0/recovered.edits] 2023-07-13 15:16:24,427 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f/recovered.edits/4.seqid 2023-07-13 15:16:24,428 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0/recovered.edits/4.seqid 2023-07-13 15:16:24,428 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/f30dad9956b9de184a6ee13566d9183f 2023-07-13 15:16:24,429 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d/recovered.edits/4.seqid to 
hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d/recovered.edits/4.seqid 2023-07-13 15:16:24,429 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb/recovered.edits/4.seqid 2023-07-13 15:16:24,429 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748/recovered.edits/4.seqid to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/archive/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748/recovered.edits/4.seqid 2023-07-13 15:16:24,429 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3e7d1b1c555592d8a843f5e732993af0 2023-07-13 15:16:24,429 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/6e624d90aee9b9c61f45e33e58a1897d 2023-07-13 15:16:24,430 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/3302f521260753d45664aa61e7d498eb 2023-07-13 15:16:24,430 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/.tmp/data/default/Group_testDisabledTableMove/1c42d5ddb0343f4760f1a2e0442fb748 2023-07-13 15:16:24,430 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-13 15:16:24,432 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 15:16:24,434 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-13 15:16:24,439 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-13 15:16:24,440 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 15:16:24,440 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-13 15:16:24,440 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261384440"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:24,440 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261384440"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:24,440 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261384440"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:24,440 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261384440"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:24,440 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261384440"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:24,442 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 15:16:24,442 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3302f521260753d45664aa61e7d498eb, NAME => 'Group_testDisabledTableMove,,1689261383468.3302f521260753d45664aa61e7d498eb.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f30dad9956b9de184a6ee13566d9183f, NAME => 'Group_testDisabledTableMove,aaaaa,1689261383468.f30dad9956b9de184a6ee13566d9183f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 1c42d5ddb0343f4760f1a2e0442fb748, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689261383468.1c42d5ddb0343f4760f1a2e0442fb748.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 6e624d90aee9b9c61f45e33e58a1897d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689261383468.6e624d90aee9b9c61f45e33e58a1897d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 3e7d1b1c555592d8a843f5e732993af0, NAME => 'Group_testDisabledTableMove,zzzzz,1689261383468.3e7d1b1c555592d8a843f5e732993af0.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 15:16:24,442 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-13 15:16:24,442 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261384442"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:24,443 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-13 15:16:24,445 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 15:16:24,446 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 39 msec 2023-07-13 15:16:24,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-13 15:16:24,518 INFO [Listener at localhost/37749] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-13 15:16:24,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:24,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:24,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:24,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:24,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:24,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377] to rsgroup default 2023-07-13 15:16:24,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:24,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1332351599 2023-07-13 15:16:24,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:24,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:24,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1332351599, current retry=0 2023-07-13 15:16:24,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,32995,1689261357367, jenkins-hbase4.apache.org,34377,1689261361353] are moved back to Group_testDisabledTableMove_1332351599 2023-07-13 15:16:24,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1332351599 => default 2023-07-13 15:16:24,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:24,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1332351599 2023-07-13 15:16:24,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:24,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:24,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:24,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:24,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:24,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:24,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:24,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:24,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:24,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:24,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:24,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:24,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:24,548 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:24,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:24,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:24,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:24,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:24,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:24,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:24,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:24,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:24,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:24,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262584559, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:24,560 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:24,562 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:24,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:24,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:24,563 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:24,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:24,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:24,581 INFO [Listener at localhost/37749] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=516 (was 515) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-579889536_17 at /127.0.0.1:42724 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1471309719_17 at /127.0.0.1:32824 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x497c82a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=800 (was 780) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=459 (was 459), ProcessCount=172 (was 172), AvailableMemoryMB=4117 (was 4165) 2023-07-13 15:16:24,581 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-13 15:16:24,598 INFO [Listener at localhost/37749] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=516, OpenFileDescriptor=800, MaxFileDescriptor=60000, SystemLoadAverage=459, ProcessCount=172, AvailableMemoryMB=4116 2023-07-13 15:16:24,598 WARN [Listener at localhost/37749] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-13 15:16:24,598 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-13 15:16:24,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:24,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:24,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:24,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:24,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:24,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:24,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:24,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:24,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:24,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:24,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:24,613 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:24,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:24,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 
15:16:24,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:24,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:24,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:24,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:24,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:24,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33053] to rsgroup master 2023-07-13 15:16:24,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:24,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50614 deadline: 1689262584625, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 2023-07-13 15:16:24,626 WARN [Listener at localhost/37749] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33053 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:24,628 INFO [Listener at localhost/37749] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:24,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:24,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:24,629 INFO [Listener at localhost/37749] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32995, jenkins-hbase4.apache.org:34377, jenkins-hbase4.apache.org:40971, jenkins-hbase4.apache.org:44089], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:24,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:24,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33053] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:24,630 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 15:16:24,630 INFO [Listener at localhost/37749] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 15:16:24,630 DEBUG [Listener at localhost/37749] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3e4d79c0 to 127.0.0.1:52275 2023-07-13 15:16:24,630 DEBUG [Listener at localhost/37749] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,633 DEBUG [Listener at localhost/37749] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 15:16:24,633 DEBUG [Listener at localhost/37749] util.JVMClusterUtil(257): Found active master hash=2138112422, stopped=false 2023-07-13 15:16:24,633 DEBUG [Listener at localhost/37749] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:24,634 DEBUG [Listener at localhost/37749] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:24,634 INFO [Listener at localhost/37749] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:16:24,635 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:24,635 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:24,635 INFO [Listener at localhost/37749] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 15:16:24,635 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:24,635 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:24,635 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:24,635 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:24,636 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:24,636 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:24,636 DEBUG [Listener at localhost/37749] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2ba7a330 to 127.0.0.1:52275 2023-07-13 15:16:24,636 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:24,636 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:24,636 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:24,636 DEBUG [Listener at localhost/37749] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,637 INFO [Listener at localhost/37749] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32995,1689261357367' ***** 2023-07-13 15:16:24,637 INFO [Listener at localhost/37749] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:24,637 INFO [Listener at localhost/37749] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44089,1689261357555' ***** 2023-07-13 15:16:24,637 INFO [Listener at localhost/37749] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:24,637 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:24,637 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:24,637 INFO [Listener at localhost/37749] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40971,1689261357748' ***** 2023-07-13 15:16:24,638 INFO [Listener at localhost/37749] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:24,638 INFO [Listener at localhost/37749] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34377,1689261361353' ***** 2023-07-13 15:16:24,638 INFO 
[RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:24,639 INFO [Listener at localhost/37749] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:24,647 INFO [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:24,656 INFO [RS:3;jenkins-hbase4:34377] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@45ea4e7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:24,656 INFO [RS:2;jenkins-hbase4:40971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@36c7be16{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:24,656 INFO [RS:0;jenkins-hbase4:32995] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2a1b55bd{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:24,656 INFO [RS:1;jenkins-hbase4:44089] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4d520d27{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:24,660 INFO [RS:3;jenkins-hbase4:34377] server.AbstractConnector(383): Stopped ServerConnector@53265acb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:24,660 INFO [RS:0;jenkins-hbase4:32995] server.AbstractConnector(383): Stopped ServerConnector@4cab7999{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:24,660 INFO [RS:3;jenkins-hbase4:34377] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:24,660 INFO [RS:2;jenkins-hbase4:40971] server.AbstractConnector(383): Stopped ServerConnector@58b8c90a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:24,660 INFO [RS:1;jenkins-hbase4:44089] server.AbstractConnector(383): Stopped ServerConnector@72b0dcfa{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:24,660 INFO [RS:3;jenkins-hbase4:34377] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b0143bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:24,660 INFO [RS:1;jenkins-hbase4:44089] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:24,660 INFO [RS:2;jenkins-hbase4:40971] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:24,660 INFO [RS:0;jenkins-hbase4:32995] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:24,662 INFO [RS:3;jenkins-hbase4:34377] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@593950e0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:24,662 INFO [RS:1;jenkins-hbase4:44089] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@5ec386b4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:24,663 INFO [RS:0;jenkins-hbase4:32995] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a6a072{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:24,664 INFO [RS:1;jenkins-hbase4:44089] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2afce463{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:24,663 INFO [RS:2;jenkins-hbase4:40971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5526bfb1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:24,664 INFO [RS:0;jenkins-hbase4:32995] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@249e2011{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:24,665 INFO [RS:2;jenkins-hbase4:40971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@477c886b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:24,667 INFO [RS:0;jenkins-hbase4:32995] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:24,668 INFO [RS:0;jenkins-hbase4:32995] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:24,668 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:24,668 INFO [RS:0;jenkins-hbase4:32995] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:24,668 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:24,668 DEBUG [RS:0;jenkins-hbase4:32995] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x218e60a3 to 127.0.0.1:52275 2023-07-13 15:16:24,668 DEBUG [RS:0;jenkins-hbase4:32995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,668 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32995,1689261357367; all regions closed. 2023-07-13 15:16:24,670 INFO [RS:1;jenkins-hbase4:44089] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:24,670 INFO [RS:2;jenkins-hbase4:40971] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:24,670 INFO [RS:1;jenkins-hbase4:44089] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-13 15:16:24,670 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:24,670 INFO [RS:3;jenkins-hbase4:34377] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:24,670 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:24,670 INFO [RS:3;jenkins-hbase4:34377] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:24,670 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:24,670 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:24,670 INFO [RS:3;jenkins-hbase4:34377] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:24,670 INFO [RS:1;jenkins-hbase4:44089] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:24,670 INFO [RS:2;jenkins-hbase4:40971] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:24,670 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:24,671 INFO [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:24,671 INFO [RS:2;jenkins-hbase4:40971] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:24,671 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(3305): Received CLOSE for 1c39d35808badfb6a5d66d7a6a08f142 2023-07-13 15:16:24,671 DEBUG [RS:3;jenkins-hbase4:34377] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x26d6c895 to 127.0.0.1:52275 2023-07-13 15:16:24,671 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(3305): Received CLOSE for baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:24,671 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:24,671 DEBUG [RS:3;jenkins-hbase4:34377] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,672 INFO [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34377,1689261361353; all regions closed. 2023-07-13 15:16:24,673 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:24,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing baa43973e5bda9f8cd7ce215ea0de4f7, disabling compactions & flushes 2023-07-13 15:16:24,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:24,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:24,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 
after waiting 0 ms 2023-07-13 15:16:24,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:24,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1c39d35808badfb6a5d66d7a6a08f142, disabling compactions & flushes 2023-07-13 15:16:24,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:24,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:24,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. after waiting 0 ms 2023-07-13 15:16:24,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:24,672 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(3305): Received CLOSE for 0d8e2cb2b78a281359de79ba388b0059 2023-07-13 15:16:24,672 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:24,674 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(3305): Received CLOSE for 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:24,674 DEBUG [RS:2;jenkins-hbase4:40971] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0cda3187 to 127.0.0.1:52275 2023-07-13 15:16:24,674 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:24,674 DEBUG [RS:1;jenkins-hbase4:44089] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5af43e40 to 127.0.0.1:52275 2023-07-13 15:16:24,674 DEBUG [RS:1;jenkins-hbase4:44089] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,674 INFO [RS:1;jenkins-hbase4:44089] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:24,674 INFO [RS:1;jenkins-hbase4:44089] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:24,674 INFO [RS:1;jenkins-hbase4:44089] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 15:16:24,674 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 15:16:24,674 DEBUG [RS:2;jenkins-hbase4:40971] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,675 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 15:16:24,675 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1478): Online Regions={baa43973e5bda9f8cd7ce215ea0de4f7=testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7.} 2023-07-13 15:16:24,676 DEBUG [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1504): Waiting on baa43973e5bda9f8cd7ce215ea0de4f7 2023-07-13 15:16:24,678 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-13 15:16:24,678 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1478): Online Regions={1c39d35808badfb6a5d66d7a6a08f142=hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142., 0d8e2cb2b78a281359de79ba388b0059=unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059., 1588230740=hbase:meta,,1.1588230740, 24214add90ee9cbdd631baadba96052d=hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d.} 2023-07-13 15:16:24,678 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:24,678 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:24,679 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:24,678 DEBUG [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1504): Waiting on 0d8e2cb2b78a281359de79ba388b0059, 1588230740, 1c39d35808badfb6a5d66d7a6a08f142, 24214add90ee9cbdd631baadba96052d 2023-07-13 15:16:24,679 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:24,679 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:24,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.48 KB heapSize=61.13 KB 2023-07-13 15:16:24,681 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-13 15:16:24,681 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-13 15:16:24,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/testRename/baa43973e5bda9f8cd7ce215ea0de4f7/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 15:16:24,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 2023-07-13 15:16:24,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for baa43973e5bda9f8cd7ce215ea0de4f7: 2023-07-13 15:16:24,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689261377822.baa43973e5bda9f8cd7ce215ea0de4f7. 
2023-07-13 15:16:24,693 DEBUG [RS:0;jenkins-hbase4:32995] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs 2023-07-13 15:16:24,693 INFO [RS:0;jenkins-hbase4:32995] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32995%2C1689261357367:(num 1689261359920) 2023-07-13 15:16:24,693 DEBUG [RS:0;jenkins-hbase4:32995] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,693 INFO [RS:0;jenkins-hbase4:32995] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:24,694 INFO [RS:0;jenkins-hbase4:32995] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:24,694 INFO [RS:0;jenkins-hbase4:32995] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:24,695 INFO [RS:0;jenkins-hbase4:32995] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:24,695 INFO [RS:0;jenkins-hbase4:32995] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:24,698 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:24,699 INFO [RS:0;jenkins-hbase4:32995] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32995 2023-07-13 15:16:24,704 DEBUG [RS:3;jenkins-hbase4:34377] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs 2023-07-13 15:16:24,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/namespace/1c39d35808badfb6a5d66d7a6a08f142/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-13 15:16:24,704 INFO [RS:3;jenkins-hbase4:34377] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34377%2C1689261361353:(num 1689261361819) 2023-07-13 15:16:24,704 DEBUG [RS:3;jenkins-hbase4:34377] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,704 INFO [RS:3;jenkins-hbase4:34377] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:24,705 INFO [RS:3;jenkins-hbase4:34377] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:24,707 INFO [RS:3;jenkins-hbase4:34377] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:24,707 INFO [RS:3;jenkins-hbase4:34377] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:24,707 INFO [RS:3;jenkins-hbase4:34377] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:24,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:24,707 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-13 15:16:24,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1c39d35808badfb6a5d66d7a6a08f142: 2023-07-13 15:16:24,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689261360323.1c39d35808badfb6a5d66d7a6a08f142. 2023-07-13 15:16:24,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0d8e2cb2b78a281359de79ba388b0059, disabling compactions & flushes 2023-07-13 15:16:24,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:24,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:24,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. after waiting 0 ms 2023-07-13 15:16:24,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:24,714 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-13 15:16:24,714 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-13 15:16:24,714 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-13 15:16:24,714 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-13 15:16:24,715 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:24,715 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:24,715 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:24,715 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:24,720 INFO [RS:3;jenkins-hbase4:34377] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34377 2023-07-13 15:16:24,720 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:24,720 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): 
regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:24,720 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32995,1689261357367 2023-07-13 15:16:24,721 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:24,721 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:24,723 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:24,723 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:24,723 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34377,1689261361353 2023-07-13 15:16:24,723 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32995,1689261357367] 2023-07-13 15:16:24,723 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32995,1689261357367; numProcessing=1 2023-07-13 15:16:24,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/default/unmovedTable/0d8e2cb2b78a281359de79ba388b0059/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 15:16:24,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:24,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0d8e2cb2b78a281359de79ba388b0059: 2023-07-13 15:16:24,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689261379484.0d8e2cb2b78a281359de79ba388b0059. 2023-07-13 15:16:24,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 24214add90ee9cbdd631baadba96052d, disabling compactions & flushes 2023-07-13 15:16:24,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 
2023-07-13 15:16:24,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:24,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. after waiting 0 ms 2023-07-13 15:16:24,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:24,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 24214add90ee9cbdd631baadba96052d 1/1 column families, dataSize=27.07 KB heapSize=44.66 KB 2023-07-13 15:16:24,735 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.56 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/info/f6d4d917b65e46e69179cf321df12b60 2023-07-13 15:16:24,743 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f6d4d917b65e46e69179cf321df12b60 2023-07-13 15:16:24,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.07 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/.tmp/m/30b7506e2cca4e45ba8d3af5aa8e53fd 2023-07-13 15:16:24,766 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/rep_barrier/07784a97bbc9479094d6c56cd728cb8b 2023-07-13 15:16:24,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 30b7506e2cca4e45ba8d3af5aa8e53fd 2023-07-13 15:16:24,771 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/.tmp/m/30b7506e2cca4e45ba8d3af5aa8e53fd as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m/30b7506e2cca4e45ba8d3af5aa8e53fd 2023-07-13 15:16:24,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 07784a97bbc9479094d6c56cd728cb8b 2023-07-13 15:16:24,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 30b7506e2cca4e45ba8d3af5aa8e53fd 2023-07-13 15:16:24,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/m/30b7506e2cca4e45ba8d3af5aa8e53fd, entries=28, sequenceid=101, filesize=6.1 K 2023-07-13 
15:16:24,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.07 KB/27722, heapSize ~44.64 KB/45712, currentSize=0 B/0 for 24214add90ee9cbdd631baadba96052d in 51ms, sequenceid=101, compaction requested=false 2023-07-13 15:16:24,798 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/table/9289e55ff6c04632aa8c4afbe17a9399 2023-07-13 15:16:24,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/rsgroup/24214add90ee9cbdd631baadba96052d/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-13 15:16:24,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:24,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:24,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 24214add90ee9cbdd631baadba96052d: 2023-07-13 15:16:24,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689261360543.24214add90ee9cbdd631baadba96052d. 2023-07-13 15:16:24,806 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9289e55ff6c04632aa8c4afbe17a9399 2023-07-13 15:16:24,807 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/info/f6d4d917b65e46e69179cf321df12b60 as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info/f6d4d917b65e46e69179cf321df12b60 2023-07-13 15:16:24,813 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f6d4d917b65e46e69179cf321df12b60 2023-07-13 15:16:24,814 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/info/f6d4d917b65e46e69179cf321df12b60, entries=62, sequenceid=210, filesize=11.9 K 2023-07-13 15:16:24,814 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/rep_barrier/07784a97bbc9479094d6c56cd728cb8b as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier/07784a97bbc9479094d6c56cd728cb8b 2023-07-13 15:16:24,820 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 07784a97bbc9479094d6c56cd728cb8b 2023-07-13 15:16:24,821 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/rep_barrier/07784a97bbc9479094d6c56cd728cb8b, entries=8, sequenceid=210, filesize=5.8 K 2023-07-13 15:16:24,822 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/.tmp/table/9289e55ff6c04632aa8c4afbe17a9399 as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table/9289e55ff6c04632aa8c4afbe17a9399 2023-07-13 15:16:24,823 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:24,823 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:32995-0x1015f41312f0001, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:24,824 INFO [RS:0;jenkins-hbase4:32995] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32995,1689261357367; zookeeper connection closed. 2023-07-13 15:16:24,824 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@e81147] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@e81147 2023-07-13 15:16:24,826 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32995,1689261357367 already deleted, retry=false 2023-07-13 15:16:24,826 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32995,1689261357367 expired; onlineServers=3 2023-07-13 15:16:24,826 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34377,1689261361353] 2023-07-13 15:16:24,826 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34377,1689261361353; numProcessing=2 2023-07-13 15:16:24,828 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34377,1689261361353 already deleted, retry=false 2023-07-13 15:16:24,828 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34377,1689261361353 expired; onlineServers=2 2023-07-13 15:16:24,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9289e55ff6c04632aa8c4afbe17a9399 2023-07-13 15:16:24,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/table/9289e55ff6c04632aa8c4afbe17a9399, entries=16, sequenceid=210, filesize=6.0 K 2023-07-13 15:16:24,833 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.48 KB/38382, heapSize ~61.08 KB/62544, currentSize=0 B/0 for 1588230740 in 154ms, sequenceid=210, compaction requested=false 2023-07-13 15:16:24,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:24,847 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=98 2023-07-13 15:16:24,847 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:24,850 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:24,850 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:24,850 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:24,876 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40971,1689261357748; all regions closed. 2023-07-13 15:16:24,879 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44089,1689261357555; all regions closed. 2023-07-13 15:16:24,884 DEBUG [RS:2;jenkins-hbase4:40971] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs 2023-07-13 15:16:24,884 INFO [RS:2;jenkins-hbase4:40971] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40971%2C1689261357748.meta:.meta(num 1689261360045) 2023-07-13 15:16:24,886 DEBUG [RS:1;jenkins-hbase4:44089] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs 2023-07-13 15:16:24,887 INFO [RS:1;jenkins-hbase4:44089] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44089%2C1689261357555.meta:.meta(num 1689261369171) 2023-07-13 15:16:24,896 DEBUG [RS:2;jenkins-hbase4:40971] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs 2023-07-13 15:16:24,896 INFO [RS:2;jenkins-hbase4:40971] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40971%2C1689261357748:(num 1689261359920) 2023-07-13 15:16:24,896 DEBUG [RS:2;jenkins-hbase4:40971] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,896 INFO [RS:2;jenkins-hbase4:40971] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:24,896 INFO [RS:2;jenkins-hbase4:40971] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:24,896 INFO [RS:2;jenkins-hbase4:40971] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:24,896 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:24,896 INFO [RS:2;jenkins-hbase4:40971] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:24,897 INFO [RS:2;jenkins-hbase4:40971] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 15:16:24,898 INFO [RS:2;jenkins-hbase4:40971] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40971 2023-07-13 15:16:24,898 DEBUG [RS:1;jenkins-hbase4:44089] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/oldWALs 2023-07-13 15:16:24,898 INFO [RS:1;jenkins-hbase4:44089] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44089%2C1689261357555:(num 1689261359920) 2023-07-13 15:16:24,898 DEBUG [RS:1;jenkins-hbase4:44089] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,898 INFO [RS:1;jenkins-hbase4:44089] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:24,898 INFO [RS:1;jenkins-hbase4:44089] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:24,899 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:24,900 INFO [RS:1;jenkins-hbase4:44089] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44089 2023-07-13 15:16:24,900 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:24,900 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40971,1689261357748 2023-07-13 15:16:24,900 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:24,901 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40971,1689261357748] 2023-07-13 15:16:24,901 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40971,1689261357748; numProcessing=3 2023-07-13 15:16:24,903 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40971,1689261357748 already deleted, retry=false 2023-07-13 15:16:24,903 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40971,1689261357748 expired; onlineServers=1 2023-07-13 15:16:24,904 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:24,904 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44089,1689261357555 2023-07-13 15:16:24,904 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, 
quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:24,905 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44089,1689261357555] 2023-07-13 15:16:24,905 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44089,1689261357555; numProcessing=4 2023-07-13 15:16:24,907 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44089,1689261357555 already deleted, retry=false 2023-07-13 15:16:24,908 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44089,1689261357555 expired; onlineServers=0 2023-07-13 15:16:24,908 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33053,1689261355495' ***** 2023-07-13 15:16:24,908 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 15:16:24,908 DEBUG [M:0;jenkins-hbase4:33053] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e4c0b9d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:24,909 INFO [M:0;jenkins-hbase4:33053] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:24,911 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:24,911 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:24,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:24,911 INFO [M:0;jenkins-hbase4:33053] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@45159224{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:24,912 INFO [M:0;jenkins-hbase4:33053] server.AbstractConnector(383): Stopped ServerConnector@2fa4614f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:24,912 INFO [M:0;jenkins-hbase4:33053] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:24,912 INFO [M:0;jenkins-hbase4:33053] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@72c0c148{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:24,913 INFO [M:0;jenkins-hbase4:33053] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c3edb02{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir/,STOPPED} 2023-07-13 
15:16:24,913 INFO [M:0;jenkins-hbase4:33053] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33053,1689261355495 2023-07-13 15:16:24,913 INFO [M:0;jenkins-hbase4:33053] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33053,1689261355495; all regions closed. 2023-07-13 15:16:24,913 DEBUG [M:0;jenkins-hbase4:33053] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:24,913 INFO [M:0;jenkins-hbase4:33053] master.HMaster(1491): Stopping master jetty server 2023-07-13 15:16:24,914 INFO [M:0;jenkins-hbase4:33053] server.AbstractConnector(383): Stopped ServerConnector@5e1629a3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:24,915 DEBUG [M:0;jenkins-hbase4:33053] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 15:16:24,915 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-13 15:16:24,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261359402] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261359402,5,FailOnTimeoutGroup] 2023-07-13 15:16:24,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261359402] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261359402,5,FailOnTimeoutGroup] 2023-07-13 15:16:24,915 DEBUG [M:0;jenkins-hbase4:33053] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 15:16:24,915 INFO [M:0;jenkins-hbase4:33053] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 15:16:24,915 INFO [M:0;jenkins-hbase4:33053] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-13 15:16:24,915 INFO [M:0;jenkins-hbase4:33053] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-13 15:16:24,915 DEBUG [M:0;jenkins-hbase4:33053] master.HMaster(1512): Stopping service threads 2023-07-13 15:16:24,915 INFO [M:0;jenkins-hbase4:33053] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 15:16:24,916 ERROR [M:0;jenkins-hbase4:33053] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-13 15:16:24,916 INFO [M:0;jenkins-hbase4:33053] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 15:16:24,917 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-13 15:16:24,917 DEBUG [M:0;jenkins-hbase4:33053] zookeeper.ZKUtil(398): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 15:16:24,917 WARN [M:0;jenkins-hbase4:33053] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 15:16:24,917 INFO [M:0;jenkins-hbase4:33053] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 15:16:24,917 INFO [M:0;jenkins-hbase4:33053] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 15:16:24,918 DEBUG [M:0;jenkins-hbase4:33053] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:24,918 INFO [M:0;jenkins-hbase4:33053] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:24,918 DEBUG [M:0;jenkins-hbase4:33053] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:24,918 DEBUG [M:0;jenkins-hbase4:33053] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:24,918 DEBUG [M:0;jenkins-hbase4:33053] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:24,918 INFO [M:0;jenkins-hbase4:33053] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.01 KB heapSize=621.09 KB 2023-07-13 15:16:24,936 INFO [M:0;jenkins-hbase4:33053] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.01 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/472fd07a88484b9fb51b9e2a3402a034 2023-07-13 15:16:24,943 DEBUG [M:0;jenkins-hbase4:33053] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/472fd07a88484b9fb51b9e2a3402a034 as hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/472fd07a88484b9fb51b9e2a3402a034 2023-07-13 15:16:24,949 INFO [M:0;jenkins-hbase4:33053] regionserver.HStore(1080): Added hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/472fd07a88484b9fb51b9e2a3402a034, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-13 15:16:24,950 INFO [M:0;jenkins-hbase4:33053] regionserver.HRegion(2948): Finished flush of dataSize ~519.01 KB/531471, heapSize ~621.08 KB/635984, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=1152, compaction requested=false 2023-07-13 15:16:24,952 INFO [M:0;jenkins-hbase4:33053] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 15:16:24,952 DEBUG [M:0;jenkins-hbase4:33053] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:24,959 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:24,959 INFO [M:0;jenkins-hbase4:33053] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 15:16:24,960 INFO [M:0;jenkins-hbase4:33053] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33053 2023-07-13 15:16:24,962 DEBUG [M:0;jenkins-hbase4:33053] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33053,1689261355495 already deleted, retry=false 2023-07-13 15:16:25,064 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:25,064 INFO [M:0;jenkins-hbase4:33053] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33053,1689261355495; zookeeper connection closed. 2023-07-13 15:16:25,064 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): master:33053-0x1015f41312f0000, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:25,334 INFO [RS:1;jenkins-hbase4:44089] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44089,1689261357555; zookeeper connection closed. 2023-07-13 15:16:25,334 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:25,334 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:44089-0x1015f41312f0002, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:25,334 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5e5a3d4f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5e5a3d4f 2023-07-13 15:16:25,434 INFO [RS:2;jenkins-hbase4:40971] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40971,1689261357748; zookeeper connection closed. 2023-07-13 15:16:25,434 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:25,434 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:40971-0x1015f41312f0003, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:25,435 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@21dbdfeb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@21dbdfeb 2023-07-13 15:16:25,535 INFO [RS:3;jenkins-hbase4:34377] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34377,1689261361353; zookeeper connection closed. 
2023-07-13 15:16:25,535 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:25,535 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): regionserver:34377-0x1015f41312f000b, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:25,536 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@674ae49d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@674ae49d 2023-07-13 15:16:25,536 INFO [Listener at localhost/37749] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-13 15:16:25,536 WARN [Listener at localhost/37749] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:25,542 INFO [Listener at localhost/37749] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:25,552 WARN [BP-1514897013-172.31.14.131-1689261351889 heartbeating to localhost/127.0.0.1:37375] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:25,552 WARN [BP-1514897013-172.31.14.131-1689261351889 heartbeating to localhost/127.0.0.1:37375] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1514897013-172.31.14.131-1689261351889 (Datanode Uuid 7de2614e-f73e-48f8-b6ee-51c4ba5eeedf) service to localhost/127.0.0.1:37375 2023-07-13 15:16:25,554 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/dfs/data/data5/current/BP-1514897013-172.31.14.131-1689261351889] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:25,554 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/dfs/data/data6/current/BP-1514897013-172.31.14.131-1689261351889] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:25,556 WARN [Listener at localhost/37749] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:25,558 INFO [Listener at localhost/37749] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:25,662 WARN [BP-1514897013-172.31.14.131-1689261351889 heartbeating to localhost/127.0.0.1:37375] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:25,662 WARN [BP-1514897013-172.31.14.131-1689261351889 heartbeating to localhost/127.0.0.1:37375] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1514897013-172.31.14.131-1689261351889 (Datanode Uuid c919d317-4c5f-434b-9ed6-3b0cc7a212cf) service to localhost/127.0.0.1:37375 2023-07-13 15:16:25,663 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/dfs/data/data3/current/BP-1514897013-172.31.14.131-1689261351889] 
fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:25,663 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/dfs/data/data4/current/BP-1514897013-172.31.14.131-1689261351889] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:25,665 WARN [Listener at localhost/37749] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:25,667 INFO [Listener at localhost/37749] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:25,671 WARN [BP-1514897013-172.31.14.131-1689261351889 heartbeating to localhost/127.0.0.1:37375] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:25,671 WARN [BP-1514897013-172.31.14.131-1689261351889 heartbeating to localhost/127.0.0.1:37375] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1514897013-172.31.14.131-1689261351889 (Datanode Uuid fe0ce298-8b81-44b2-b5d5-733cee6fb2d7) service to localhost/127.0.0.1:37375 2023-07-13 15:16:25,672 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/dfs/data/data1/current/BP-1514897013-172.31.14.131-1689261351889] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:25,672 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/cluster_1f750d3d-0311-572c-d566-36dcd4d264c3/dfs/data/data2/current/BP-1514897013-172.31.14.131-1689261351889] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:25,691 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:25,691 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:25,691 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:25,702 INFO [Listener at localhost/37749] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:25,825 INFO [Listener at localhost/37749] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 15:16:25,882 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-13 15:16:25,882 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 15:16:25,882 
INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.log.dir so I do NOT create it in target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5 2023-07-13 15:16:25,882 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4393a52e-8045-b6bc-1ee4-b5c40e742ca0/hadoop.tmp.dir so I do NOT create it in target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5 2023-07-13 15:16:25,882 INFO [Listener at localhost/37749] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5, deleteOnExit=true 2023-07-13 15:16:25,882 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 15:16:25,882 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/test.cache.data in system properties and HBase conf 2023-07-13 15:16:25,882 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 15:16:25,882 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir in system properties and HBase conf 2023-07-13 15:16:25,883 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 15:16:25,883 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 15:16:25,883 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 15:16:25,883 DEBUG [Listener at localhost/37749] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-13 15:16:25,883 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 15:16:25,883 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 15:16:25,883 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 15:16:25,883 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 15:16:25,883 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 15:16:25,884 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 15:16:25,884 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 15:16:25,884 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 15:16:25,884 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 15:16:25,884 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/nfs.dump.dir in system properties and HBase conf 2023-07-13 15:16:25,884 INFO [Listener at localhost/37749] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir in system properties and HBase conf 2023-07-13 15:16:25,884 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 15:16:25,884 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 15:16:25,885 INFO [Listener at localhost/37749] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 15:16:25,889 WARN [Listener at localhost/37749] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 15:16:25,890 WARN [Listener at localhost/37749] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 15:16:25,920 DEBUG [Listener at localhost/37749-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015f41312f000a, quorum=127.0.0.1:52275, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 15:16:25,921 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015f41312f000a, quorum=127.0.0.1:52275, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-13 15:16:25,941 WARN [Listener at localhost/37749] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:25,943 INFO [Listener at localhost/37749] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:25,947 INFO [Listener at localhost/37749] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir/Jetty_localhost_41991_hdfs____47fhum/webapp 2023-07-13 15:16:26,048 INFO [Listener at localhost/37749] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41991 2023-07-13 15:16:26,052 WARN [Listener at localhost/37749] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 15:16:26,052 WARN [Listener at localhost/37749] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 15:16:26,095 WARN [Listener at localhost/32909] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:26,109 WARN [Listener at localhost/32909] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:26,111 WARN [Listener 
at localhost/32909] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:26,112 INFO [Listener at localhost/32909] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:26,116 INFO [Listener at localhost/32909] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir/Jetty_localhost_35599_datanode____pb9xwa/webapp 2023-07-13 15:16:26,210 INFO [Listener at localhost/32909] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35599 2023-07-13 15:16:26,221 WARN [Listener at localhost/46865] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:26,238 WARN [Listener at localhost/46865] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:26,240 WARN [Listener at localhost/46865] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:26,241 INFO [Listener at localhost/46865] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:26,245 INFO [Listener at localhost/46865] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir/Jetty_localhost_42489_datanode____.eksmg2/webapp 2023-07-13 15:16:26,339 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x93db9a294428e81c: Processing first storage report for DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc from datanode a0dbed22-9ff1-4f8c-b2f9-67796d1c95c2 2023-07-13 15:16:26,339 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x93db9a294428e81c: from storage DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc node DatanodeRegistration(127.0.0.1:37181, datanodeUuid=a0dbed22-9ff1-4f8c-b2f9-67796d1c95c2, infoPort=34467, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=1698467874;c=1689261385892), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:26,340 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x93db9a294428e81c: Processing first storage report for DS-1ad760ac-31aa-488f-8dc2-9147c4f9a469 from datanode a0dbed22-9ff1-4f8c-b2f9-67796d1c95c2 2023-07-13 15:16:26,340 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x93db9a294428e81c: from storage DS-1ad760ac-31aa-488f-8dc2-9147c4f9a469 node DatanodeRegistration(127.0.0.1:37181, datanodeUuid=a0dbed22-9ff1-4f8c-b2f9-67796d1c95c2, infoPort=34467, infoSecurePort=0, ipcPort=46865, storageInfo=lv=-57;cid=testClusterID;nsid=1698467874;c=1689261385892), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:26,372 INFO [Listener at localhost/46865] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42489 2023-07-13 15:16:26,394 WARN [Listener at localhost/32975] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-13 15:16:26,425 WARN [Listener at localhost/32975] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-13 15:16:26,513 WARN [Listener at localhost/32975] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:26,522 WARN [Listener at localhost/32975] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:26,526 INFO [Listener at localhost/32975] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:26,530 INFO [Listener at localhost/32975] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir/Jetty_localhost_44767_datanode____.19sh29/webapp 2023-07-13 15:16:26,547 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe8a20ee6fa596fcc: Processing first storage report for DS-3d217e43-df6e-49c6-8a32-6b0387eb861a from datanode edc2ad89-12f9-4a10-a912-3af1978a3336 2023-07-13 15:16:26,547 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe8a20ee6fa596fcc: from storage DS-3d217e43-df6e-49c6-8a32-6b0387eb861a node DatanodeRegistration(127.0.0.1:37689, datanodeUuid=edc2ad89-12f9-4a10-a912-3af1978a3336, infoPort=33961, infoSecurePort=0, ipcPort=32975, storageInfo=lv=-57;cid=testClusterID;nsid=1698467874;c=1689261385892), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:26,547 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe8a20ee6fa596fcc: Processing first storage report for DS-dd726753-26c9-47db-9385-e020ab697ae5 from datanode edc2ad89-12f9-4a10-a912-3af1978a3336 2023-07-13 15:16:26,547 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe8a20ee6fa596fcc: from storage DS-dd726753-26c9-47db-9385-e020ab697ae5 node DatanodeRegistration(127.0.0.1:37689, datanodeUuid=edc2ad89-12f9-4a10-a912-3af1978a3336, infoPort=33961, infoSecurePort=0, ipcPort=32975, storageInfo=lv=-57;cid=testClusterID;nsid=1698467874;c=1689261385892), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:26,632 INFO [Listener at localhost/32975] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44767 2023-07-13 15:16:26,642 WARN [Listener at localhost/34081] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:26,760 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb08da4359dab181: Processing first storage report for DS-4c0add0c-0d65-4b4f-bb2f-541d75859790 from datanode 76553b21-1526-4e07-b0f1-1119c8d2c999 2023-07-13 15:16:26,760 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb08da4359dab181: from storage DS-4c0add0c-0d65-4b4f-bb2f-541d75859790 node DatanodeRegistration(127.0.0.1:42985, datanodeUuid=76553b21-1526-4e07-b0f1-1119c8d2c999, infoPort=33865, infoSecurePort=0, ipcPort=34081, storageInfo=lv=-57;cid=testClusterID;nsid=1698467874;c=1689261385892), blocks: 0, hasStaleStorage: true, processing time: 0 
msecs, invalidatedBlocks: 0 2023-07-13 15:16:26,760 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb08da4359dab181: Processing first storage report for DS-ce12d796-d07e-4e63-814e-4555ad1dfd5f from datanode 76553b21-1526-4e07-b0f1-1119c8d2c999 2023-07-13 15:16:26,760 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb08da4359dab181: from storage DS-ce12d796-d07e-4e63-814e-4555ad1dfd5f node DatanodeRegistration(127.0.0.1:42985, datanodeUuid=76553b21-1526-4e07-b0f1-1119c8d2c999, infoPort=33865, infoSecurePort=0, ipcPort=34081, storageInfo=lv=-57;cid=testClusterID;nsid=1698467874;c=1689261385892), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:26,761 DEBUG [Listener at localhost/34081] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5 2023-07-13 15:16:26,763 INFO [Listener at localhost/34081] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/zookeeper_0, clientPort=59953, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 15:16:26,766 INFO [Listener at localhost/34081] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59953 2023-07-13 15:16:26,766 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:26,767 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:26,790 INFO [Listener at localhost/34081] util.FSUtils(471): Created version file at hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19 with version=8 2023-07-13 15:16:26,791 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/hbase-staging 2023-07-13 15:16:26,792 DEBUG [Listener at localhost/34081] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 15:16:26,792 DEBUG [Listener at localhost/34081] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 15:16:26,792 DEBUG [Listener at localhost/34081] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 15:16:26,792 DEBUG [Listener at localhost/34081] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
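The entries above trace the second minicluster start that was requested at 15:16:25,882 with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}: DFS comes up, the three datanodes register, MiniZooKeeperCluster starts on client port 59953, and hbase.rootdir is created. A minimal sketch of driving the same startup/shutdown cycle from a test, assuming only the HBaseTestingUtility and StartMiniClusterOption APIs named in the log (the class name below is illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();

            // Same shape as the option printed in the log: 1 master, 3 region servers,
            // 3 data nodes, 1 ZooKeeper server.
            StartMiniClusterOption option = StartMiniClusterOption.builder()
                .numMasters(1)
                .numRegionServers(3)
                .numDataNodes(3)
                .numZkServers(1)
                .build();

            util.startMiniCluster(option);      // brings up DFS, ZK, master and region servers
            try {
                // ... run test assertions against util.getConnection() / util.getAdmin() ...
            } finally {
                util.shutdownMiniCluster();     // corresponds to the "Minicluster is down" entry
            }
        }
    }
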
2023-07-13 15:16:26,793 INFO [Listener at localhost/34081] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:26,793 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:26,794 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:26,794 INFO [Listener at localhost/34081] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:26,794 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:26,794 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:26,794 INFO [Listener at localhost/34081] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:26,801 INFO [Listener at localhost/34081] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37719 2023-07-13 15:16:26,802 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:26,803 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:26,805 INFO [Listener at localhost/34081] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37719 connecting to ZooKeeper ensemble=127.0.0.1:59953 2023-07-13 15:16:26,814 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:377190x0, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:26,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37719-0x1015f41af0e0000 connected 2023-07-13 15:16:26,832 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:26,832 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:26,833 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:26,834 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37719 2023-07-13 15:16:26,835 DEBUG [Listener at localhost/34081] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37719 2023-07-13 15:16:26,835 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37719 2023-07-13 15:16:26,835 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37719 2023-07-13 15:16:26,836 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37719 2023-07-13 15:16:26,838 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:26,838 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:26,838 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:26,838 INFO [Listener at localhost/34081] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 15:16:26,839 INFO [Listener at localhost/34081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:26,839 INFO [Listener at localhost/34081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:26,839 INFO [Listener at localhost/34081] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
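The master's RPC setup above instantiates several executor pools (default.FPBQ.Fifo, priority.RWQ.Fifo, replication.FPBQ.Fifo, metaPriority.FPBQ.Fifo) and then binds a NettyRpcServer. A sketch of the configuration keys that commonly shape these pools, with key names taken from the HBase reference guide; the values are illustrative and not read from this test's configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcPoolConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Handler threads for the default call queue (the log shows handlerCount=3).
            conf.setInt("hbase.regionserver.handler.count", 3);
            // 0 => a single call queue per executor, as in the numCallQueues=1 entries above.
            conf.setFloat("hbase.ipc.server.callqueue.handler.factor", 0f);
            // Split handlers between reads and writes (the RWQ executor above).
            conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
            // No dedicated scan queues (scanQueues=0 scanHandlers=0 in the log).
            conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0f);

            System.out.println(conf.get("hbase.regionserver.handler.count"));
        }
    }
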
2023-07-13 15:16:26,839 INFO [Listener at localhost/34081] http.HttpServer(1146): Jetty bound to port 34121 2023-07-13 15:16:26,839 INFO [Listener at localhost/34081] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:26,842 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:26,843 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@44a9cf4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:26,843 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:26,843 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@625ee407{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:26,988 INFO [Listener at localhost/34081] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:26,990 INFO [Listener at localhost/34081] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:26,990 INFO [Listener at localhost/34081] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:26,990 INFO [Listener at localhost/34081] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:26,992 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:26,994 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3362b59a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir/jetty-0_0_0_0-34121-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5489770093699104596/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:26,995 INFO [Listener at localhost/34081] server.AbstractConnector(333): Started ServerConnector@326eddd2{HTTP/1.1, (http/1.1)}{0.0.0.0:34121} 2023-07-13 15:16:26,995 INFO [Listener at localhost/34081] server.Server(415): Started @37076ms 2023-07-13 15:16:26,996 INFO [Listener at localhost/34081] master.HMaster(444): hbase.rootdir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19, hbase.cluster.distributed=false 2023-07-13 15:16:27,011 INFO [Listener at localhost/34081] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:27,011 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,011 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,011 INFO 
[Listener at localhost/34081] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:27,011 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,011 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:27,011 INFO [Listener at localhost/34081] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:27,013 INFO [Listener at localhost/34081] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41061 2023-07-13 15:16:27,013 INFO [Listener at localhost/34081] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:27,015 DEBUG [Listener at localhost/34081] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:27,015 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:27,017 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:27,018 INFO [Listener at localhost/34081] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41061 connecting to ZooKeeper ensemble=127.0.0.1:59953 2023-07-13 15:16:27,027 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:410610x0, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:27,029 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:410610x0, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:27,030 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41061-0x1015f41af0e0001 connected 2023-07-13 15:16:27,031 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:27,032 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:27,032 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41061 2023-07-13 15:16:27,032 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41061 2023-07-13 15:16:27,042 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41061 2023-07-13 15:16:27,048 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41061 2023-07-13 15:16:27,049 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41061 2023-07-13 15:16:27,051 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:27,051 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:27,051 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:27,052 INFO [Listener at localhost/34081] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:27,052 INFO [Listener at localhost/34081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:27,052 INFO [Listener at localhost/34081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:27,052 INFO [Listener at localhost/34081] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:27,054 INFO [Listener at localhost/34081] http.HttpServer(1146): Jetty bound to port 42523 2023-07-13 15:16:27,054 INFO [Listener at localhost/34081] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:27,057 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,057 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3ee8c6a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:27,058 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,058 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@204cfa25{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:27,198 INFO [Listener at localhost/34081] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:27,199 INFO [Listener at localhost/34081] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:27,199 INFO [Listener at localhost/34081] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:27,200 INFO [Listener at localhost/34081] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:27,201 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,202 INFO 
[Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@59f5ce37{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir/jetty-0_0_0_0-42523-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7960990732060186334/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:27,203 INFO [Listener at localhost/34081] server.AbstractConnector(333): Started ServerConnector@23cf60b9{HTTP/1.1, (http/1.1)}{0.0.0.0:42523} 2023-07-13 15:16:27,203 INFO [Listener at localhost/34081] server.Server(415): Started @37283ms 2023-07-13 15:16:27,220 INFO [Listener at localhost/34081] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:27,220 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,220 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,221 INFO [Listener at localhost/34081] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:27,221 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,221 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:27,221 INFO [Listener at localhost/34081] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:27,222 INFO [Listener at localhost/34081] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45979 2023-07-13 15:16:27,223 INFO [Listener at localhost/34081] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:27,229 DEBUG [Listener at localhost/34081] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:27,229 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:27,231 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:27,232 INFO [Listener at localhost/34081] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45979 connecting to ZooKeeper ensemble=127.0.0.1:59953 2023-07-13 15:16:27,236 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:459790x0, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 
15:16:27,237 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:459790x0, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:27,238 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:459790x0, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:27,239 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:459790x0, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:27,247 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45979 2023-07-13 15:16:27,248 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45979-0x1015f41af0e0002 connected 2023-07-13 15:16:27,249 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45979 2023-07-13 15:16:27,249 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45979 2023-07-13 15:16:27,250 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45979 2023-07-13 15:16:27,250 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45979 2023-07-13 15:16:27,256 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:27,256 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:27,256 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:27,257 INFO [Listener at localhost/34081] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:27,257 INFO [Listener at localhost/34081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:27,258 INFO [Listener at localhost/34081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:27,258 INFO [Listener at localhost/34081] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
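Each region server above connects to the ZooKeeper ensemble at 127.0.0.1:59953 and sets watchers under /hbase. A client, such as the VerifyingRSGroupAdminClient mentioned earlier in this log, reaches the cluster through the same quorum. A minimal sketch, assuming the RSGroupAdminClient API from the hbase-rsgroup module this test exercises; the group name is purely illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupClientSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Point the client at the minicluster's ZooKeeper (client port 59953 in this run).
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");
            conf.setInt("hbase.zookeeper.property.clientPort", 59953);

            try (Connection conn = ConnectionFactory.createConnection(conf)) {
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
                rsGroupAdmin.addRSGroup("my_group");                    // illustrative group name
                RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("my_group");
                System.out.println("group servers: " + info.getServers());
            }
        }
    }
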
2023-07-13 15:16:27,259 INFO [Listener at localhost/34081] http.HttpServer(1146): Jetty bound to port 42989 2023-07-13 15:16:27,259 INFO [Listener at localhost/34081] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:27,273 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,273 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@30ad32c0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:27,274 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,274 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@321cd692{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:27,407 INFO [Listener at localhost/34081] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:27,409 INFO [Listener at localhost/34081] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:27,409 INFO [Listener at localhost/34081] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:27,409 INFO [Listener at localhost/34081] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:27,410 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,412 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1fa6830d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir/jetty-0_0_0_0-42989-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1412281943540765290/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:27,413 INFO [Listener at localhost/34081] server.AbstractConnector(333): Started ServerConnector@1a24290f{HTTP/1.1, (http/1.1)}{0.0.0.0:42989} 2023-07-13 15:16:27,413 INFO [Listener at localhost/34081] server.Server(415): Started @37493ms 2023-07-13 15:16:27,430 INFO [Listener at localhost/34081] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:27,431 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,431 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,431 INFO [Listener at localhost/34081] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:27,431 INFO 
[Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:27,431 INFO [Listener at localhost/34081] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:27,431 INFO [Listener at localhost/34081] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:27,435 INFO [Listener at localhost/34081] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42367 2023-07-13 15:16:27,436 INFO [Listener at localhost/34081] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:27,447 DEBUG [Listener at localhost/34081] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:27,448 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:27,449 INFO [Listener at localhost/34081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:27,451 INFO [Listener at localhost/34081] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42367 connecting to ZooKeeper ensemble=127.0.0.1:59953 2023-07-13 15:16:27,456 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:423670x0, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:27,459 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42367-0x1015f41af0e0003 connected 2023-07-13 15:16:27,461 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:27,462 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:27,463 DEBUG [Listener at localhost/34081] zookeeper.ZKUtil(164): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:27,464 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42367 2023-07-13 15:16:27,465 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42367 2023-07-13 15:16:27,465 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42367 2023-07-13 15:16:27,466 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42367 2023-07-13 15:16:27,467 DEBUG [Listener at localhost/34081] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42367 2023-07-13 15:16:27,469 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:27,469 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:27,469 INFO [Listener at localhost/34081] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:27,470 INFO [Listener at localhost/34081] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:27,471 INFO [Listener at localhost/34081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:27,471 INFO [Listener at localhost/34081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:27,471 INFO [Listener at localhost/34081] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:27,472 INFO [Listener at localhost/34081] http.HttpServer(1146): Jetty bound to port 45619 2023-07-13 15:16:27,472 INFO [Listener at localhost/34081] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:27,478 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,478 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@36cdc80c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:27,479 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,479 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f728d8a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:27,636 INFO [Listener at localhost/34081] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:27,641 INFO [Listener at localhost/34081] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:27,641 INFO [Listener at localhost/34081] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:27,642 INFO [Listener at localhost/34081] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:27,643 INFO [Listener at localhost/34081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:27,644 INFO [Listener at localhost/34081] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@54e4bf8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/java.io.tmpdir/jetty-0_0_0_0-45619-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1978908390235219726/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:27,645 INFO [Listener at localhost/34081] server.AbstractConnector(333): Started ServerConnector@28bc48f0{HTTP/1.1, (http/1.1)}{0.0.0.0:45619} 2023-07-13 15:16:27,646 INFO [Listener at localhost/34081] server.Server(415): Started @37726ms 2023-07-13 15:16:27,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:27,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3449362{HTTP/1.1, (http/1.1)}{0.0.0.0:38493} 2023-07-13 15:16:27,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37744ms 2023-07-13 15:16:27,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:27,666 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:27,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:27,669 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:27,669 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:27,669 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:27,671 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:27,671 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:27,671 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:27,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37719,1689261386793 from backup master directory 2023-07-13 15:16:27,675 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:27,676 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:27,676 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:27,676 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:27,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:27,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/hbase.id with ID: acc042f7-825e-464b-a893-b331db7df3f2 2023-07-13 15:16:27,719 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:27,722 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:27,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2cad65ac to 127.0.0.1:59953 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:27,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26da2112, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:27,746 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:27,746 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 15:16:27,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:27,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store-tmp 2023-07-13 15:16:27,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:27,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:27,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:27,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:27,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:27,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:27,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
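For reference, the descriptor printed above for the master's local store ('proc' family: BLOOMFILTER ROW, VERSIONS 1, BLOCKSIZE 65536, etc.) is plain shell notation for a column family. A minimal illustrative sketch of an equivalent descriptor built through the public HBase 2.x client API follows; the table name is hypothetical and this is not the test's own code.

// Illustrative sketch only: rebuilds a column family with the attributes logged above.
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class ProcFamilySketch {
  public static TableDescriptor build() {
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)            // BLOOMFILTER => 'ROW'
        .setInMemory(false)                           // IN_MEMORY => 'false'
        .setMaxVersions(1)                            // VERSIONS => '1'
        .setKeepDeletedCells(KeepDeletedCells.FALSE)  // KEEP_DELETED_CELLS => 'FALSE'
        .setDataBlockEncoding(DataBlockEncoding.NONE) // DATA_BLOCK_ENCODING => 'NONE'
        .setCompressionType(Compression.Algorithm.NONE)
        .setTimeToLive(HConstants.FOREVER)            // TTL => 'FOREVER'
        .setMinVersions(0)
        .setBlockCacheEnabled(true)                   // BLOCKCACHE => 'true'
        .setBlocksize(64 * 1024)                      // BLOCKSIZE => '65536'
        .setScope(0)                                  // REPLICATION_SCOPE => '0'
        .build();
    // Hypothetical table name used purely for illustration.
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example", "store"))
        .setColumnFamily(proc)
        .build();
  }
}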
2023-07-13 15:16:27,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:27,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/WALs/jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:27,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37719%2C1689261386793, suffix=, logDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/WALs/jenkins-hbase4.apache.org,37719,1689261386793, archiveDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/oldWALs, maxLogs=10 2023-07-13 15:16:27,781 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK] 2023-07-13 15:16:27,781 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK] 2023-07-13 15:16:27,828 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK] 2023-07-13 15:16:27,834 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/WALs/jenkins-hbase4.apache.org,37719,1689261386793/jenkins-hbase4.apache.org%2C37719%2C1689261386793.1689261387764 2023-07-13 15:16:27,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK], DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK], DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK]] 2023-07-13 15:16:27,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:27,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:27,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:27,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:27,837 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:27,839 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 15:16:27,840 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 15:16:27,840 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:27,841 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:27,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:27,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:27,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:27,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10826186400, jitterRate=0.008267179131507874}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:27,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:27,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 15:16:27,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 15:16:27,849 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 15:16:27,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 15:16:27,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-13 15:16:27,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-13 15:16:27,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 15:16:27,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 15:16:27,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-13 15:16:27,852 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 15:16:27,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 15:16:27,853 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 15:16:27,855 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:27,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 15:16:27,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 15:16:27,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 15:16:27,858 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:27,858 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:27,858 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-13 15:16:27,859 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:27,859 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:27,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37719,1689261386793, sessionid=0x1015f41af0e0000, setting cluster-up flag (Was=false) 2023-07-13 15:16:27,866 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:27,872 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 15:16:27,872 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:27,876 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:27,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 15:16:27,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:27,883 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.hbase-snapshot/.tmp 2023-07-13 15:16:27,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 15:16:27,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 15:16:27,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 15:16:27,888 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:27,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
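The lines above show the RSGroupAdminEndpoint coprocessor being loaded and the RSGroupInfoManager refreshing in offline mode, which is the rsgroup machinery this test suite exercises. In HBase 2.x deployments that feature is normally switched on with two configuration properties; a minimal sketch under that assumption (not this test's own setup code) is:

// Illustrative sketch only: standard HBase 2.x properties for enabling rsgroups.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupSetupSketch {
  public static Configuration rsGroupEnabledConf() {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup admin endpoint as a master coprocessor (seen loading above).
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // Use the group-aware balancer so placement decisions honor rsgroup membership.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}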
2023-07-13 15:16:27,889 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-13 15:16:27,890 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:27,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:27,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:16:27,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:27,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
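The two "Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, ..." lines above report the StochasticLoadBalancer's tuning knobs. A short sketch of how those values are typically supplied follows; the key names are the StochasticLoadBalancer properties as I understand them in HBase 2.x, so treat them as assumptions rather than values taken from this test.

// Illustrative sketch only: balancer tuning keys mirroring the values logged above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30_000); // ms
    return conf;
  }
}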
2023-07-13 15:16:27,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:27,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:27,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:27,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:27,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 15:16:27,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:27,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:27,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:27,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689261417907 2023-07-13 15:16:27,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 15:16:27,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 15:16:27,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 15:16:27,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 15:16:27,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 15:16:27,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 15:16:27,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:27,908 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:27,908 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 15:16:27,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 15:16:27,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 15:16:27,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 15:16:27,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 15:16:27,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 15:16:27,910 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261387910,5,FailOnTimeoutGroup] 2023-07-13 15:16:27,910 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261387910,5,FailOnTimeoutGroup] 2023-07-13 15:16:27,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:27,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 15:16:27,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:27,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
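The LogsCleaner and HFileCleaner chores initialized above run pluggable delegate cleaners. As a rough sketch of how those delegates and the chore period are configured (key names are assumptions based on common HBase 2.x usage, not taken from this run):

// Illustrative sketch only: pluggable master cleaner chores and their period.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChoreSketch {
  public static Configuration conf() {
    Configuration conf = HBaseConfiguration.create();
    // Old-WAL cleaner delegates (backing the LogsCleaner chore logged above).
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
    // HFile cleaner delegates (backing the HFileCleaner chore logged above).
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
            + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
    // Chore period; the log shows period=600000 ms for both chores (assumed key).
    conf.setInt("hbase.master.cleaner.interval", 600_000);
    return conf;
  }
}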
2023-07-13 15:16:27,910 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:27,922 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:27,923 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:27,923 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19 2023-07-13 15:16:27,933 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:27,934 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:27,935 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/info 2023-07-13 15:16:27,936 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:27,937 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:27,937 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:27,939 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:27,939 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:27,940 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:27,940 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:27,941 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/table 2023-07-13 15:16:27,942 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:27,942 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:27,943 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740 2023-07-13 15:16:27,944 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740 2023-07-13 15:16:27,946 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 15:16:27,947 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:27,947 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(951): ClusterId : acc042f7-825e-464b-a893-b331db7df3f2 2023-07-13 15:16:27,947 INFO [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(951): ClusterId : acc042f7-825e-464b-a893-b331db7df3f2 2023-07-13 15:16:27,951 DEBUG [RS:1;jenkins-hbase4:45979] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:27,951 DEBUG [RS:0;jenkins-hbase4:41061] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:27,952 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:27,953 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9474053920, jitterRate=-0.11765997111797333}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:27,953 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:27,953 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:27,953 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:27,953 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:27,953 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:27,953 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:27,953 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:27,953 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal 
for 1588230740: 2023-07-13 15:16:27,954 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:27,954 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 15:16:27,954 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 15:16:27,956 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 15:16:27,956 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(951): ClusterId : acc042f7-825e-464b-a893-b331db7df3f2 2023-07-13 15:16:27,958 DEBUG [RS:2;jenkins-hbase4:42367] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:27,958 DEBUG [RS:0;jenkins-hbase4:41061] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:27,958 DEBUG [RS:0;jenkins-hbase4:41061] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:27,959 DEBUG [RS:1;jenkins-hbase4:45979] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:27,959 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 15:16:27,959 DEBUG [RS:1;jenkins-hbase4:45979] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:27,961 DEBUG [RS:2;jenkins-hbase4:42367] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:27,961 DEBUG [RS:2;jenkins-hbase4:42367] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:27,962 DEBUG [RS:1;jenkins-hbase4:45979] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:27,962 DEBUG [RS:0;jenkins-hbase4:41061] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:27,963 DEBUG [RS:1;jenkins-hbase4:45979] zookeeper.ReadOnlyZKClient(139): Connect 0x0067e945 to 127.0.0.1:59953 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:27,963 DEBUG [RS:0;jenkins-hbase4:41061] zookeeper.ReadOnlyZKClient(139): Connect 0x2babce8f to 127.0.0.1:59953 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:27,964 DEBUG [RS:2;jenkins-hbase4:42367] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:27,972 DEBUG [RS:2;jenkins-hbase4:42367] zookeeper.ReadOnlyZKClient(139): Connect 0x24e91c32 to 127.0.0.1:59953 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:27,978 DEBUG [RS:1;jenkins-hbase4:45979] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ea01739, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, 
readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:27,978 DEBUG [RS:0;jenkins-hbase4:41061] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@256c358a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:27,978 DEBUG [RS:0;jenkins-hbase4:41061] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64513fb1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:27,978 DEBUG [RS:1;jenkins-hbase4:45979] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f9d1d02, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:27,982 DEBUG [RS:2;jenkins-hbase4:42367] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a2ef6df, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:27,982 DEBUG [RS:2;jenkins-hbase4:42367] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e117742, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:27,990 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:45979 2023-07-13 15:16:27,990 INFO [RS:1;jenkins-hbase4:45979] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:27,990 INFO [RS:1;jenkins-hbase4:45979] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:27,990 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:27,990 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37719,1689261386793 with isa=jenkins-hbase4.apache.org/172.31.14.131:45979, startcode=1689261387219 2023-07-13 15:16:27,991 DEBUG [RS:1;jenkins-hbase4:45979] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:27,991 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:42367 2023-07-13 15:16:27,991 INFO [RS:2;jenkins-hbase4:42367] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:27,991 INFO [RS:2;jenkins-hbase4:42367] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:27,991 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 15:16:27,991 DEBUG [RS:0;jenkins-hbase4:41061] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41061 2023-07-13 15:16:27,991 INFO [RS:0;jenkins-hbase4:41061] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:27,991 INFO [RS:0;jenkins-hbase4:41061] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:27,991 DEBUG [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:27,992 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37719,1689261386793 with isa=jenkins-hbase4.apache.org/172.31.14.131:42367, startcode=1689261387430 2023-07-13 15:16:27,992 INFO [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37719,1689261386793 with isa=jenkins-hbase4.apache.org/172.31.14.131:41061, startcode=1689261387010 2023-07-13 15:16:27,992 DEBUG [RS:2;jenkins-hbase4:42367] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:27,992 DEBUG [RS:0;jenkins-hbase4:41061] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:27,994 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59521, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:27,994 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51597, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:27,995 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37719] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:27,994 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60707, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:27,995 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:27,996 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37719] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:27,996 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 15:16:27,996 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:27,996 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 15:16:27,996 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19 2023-07-13 15:16:27,996 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37719] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:27,996 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32909 2023-07-13 15:16:27,997 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19 2023-07-13 15:16:27,997 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:27,997 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32909 2023-07-13 15:16:27,997 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 15:16:27,997 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34121 2023-07-13 15:16:27,997 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34121 2023-07-13 15:16:27,997 DEBUG [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19 2023-07-13 15:16:27,997 DEBUG [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:32909 2023-07-13 15:16:27,997 DEBUG [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34121 2023-07-13 15:16:28,005 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:28,007 DEBUG [RS:1;jenkins-hbase4:45979] zookeeper.ZKUtil(162): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,007 DEBUG [RS:2;jenkins-hbase4:42367] zookeeper.ZKUtil(162): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,007 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41061,1689261387010] 2023-07-13 15:16:28,007 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45979,1689261387219] 2023-07-13 
15:16:28,007 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42367,1689261387430] 2023-07-13 15:16:28,007 WARN [RS:1;jenkins-hbase4:45979] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:28,007 WARN [RS:2;jenkins-hbase4:42367] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:28,007 INFO [RS:1;jenkins-hbase4:45979] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:28,007 INFO [RS:2;jenkins-hbase4:42367] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:28,007 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,007 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,008 DEBUG [RS:0;jenkins-hbase4:41061] zookeeper.ZKUtil(162): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:28,008 WARN [RS:0;jenkins-hbase4:41061] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
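Both the master's local region WAL and the region servers above instantiate AsyncFSWALProvider. Which provider WALFactory instantiates is configurable; a minimal sketch using the standard 2.x key (again an illustration, not this test's configuration code):

// Illustrative sketch only: selecting the WAL provider that WALFactory instantiates.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderSketch {
  public static Configuration asyncWal() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");       // AsyncFSWALProvider, as logged above
    // conf.set("hbase.wal.provider", "filesystem"); // classic FSHLog-based provider
    return conf;
  }
}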
2023-07-13 15:16:28,008 INFO [RS:0;jenkins-hbase4:41061] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:28,008 DEBUG [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1948): logDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:28,014 DEBUG [RS:1;jenkins-hbase4:45979] zookeeper.ZKUtil(162): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,014 DEBUG [RS:2;jenkins-hbase4:42367] zookeeper.ZKUtil(162): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,014 DEBUG [RS:1;jenkins-hbase4:45979] zookeeper.ZKUtil(162): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:28,014 DEBUG [RS:0;jenkins-hbase4:41061] zookeeper.ZKUtil(162): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,014 DEBUG [RS:2;jenkins-hbase4:42367] zookeeper.ZKUtil(162): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:28,015 DEBUG [RS:1;jenkins-hbase4:45979] zookeeper.ZKUtil(162): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,015 DEBUG [RS:0;jenkins-hbase4:41061] zookeeper.ZKUtil(162): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:28,015 DEBUG [RS:2;jenkins-hbase4:42367] zookeeper.ZKUtil(162): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,015 DEBUG [RS:0;jenkins-hbase4:41061] zookeeper.ZKUtil(162): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,016 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:28,016 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:28,016 INFO [RS:1;jenkins-hbase4:45979] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:28,016 INFO [RS:2;jenkins-hbase4:42367] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:28,016 DEBUG [RS:0;jenkins-hbase4:41061] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:28,017 INFO [RS:0;jenkins-hbase4:41061] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:28,017 INFO [RS:1;jenkins-hbase4:45979] 
regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:28,017 INFO [RS:1;jenkins-hbase4:45979] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:28,017 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,018 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:28,019 INFO [RS:2;jenkins-hbase4:42367] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:28,019 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,020 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,020 INFO [RS:2;jenkins-hbase4:42367] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:28,020 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,020 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,020 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,020 INFO [RS:0;jenkins-hbase4:41061] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:28,020 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,020 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:28,020 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,021 INFO [RS:0;jenkins-hbase4:41061] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:28,021 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:28,021 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:28,021 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,021 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,021 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,021 INFO [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:28,021 DEBUG [RS:1;jenkins-hbase4:45979] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,022 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,022 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,022 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,022 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,022 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,023 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,023 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,023 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,023 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,023 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,023 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:28,023 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,023 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:28,024 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:28,024 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:2;jenkins-hbase4:42367] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,024 DEBUG [RS:0;jenkins-hbase4:41061] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:28,025 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,025 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,026 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
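The RS_* executor lines above report only two sizing knobs, corePoolSize and maxPoolSize. The sketch below uses the plain JDK ThreadPoolExecutor to illustrate what those two numbers mean in general; it is not HBase's executor.ExecutorService, and the queue type and keep-alive chosen here are illustrative assumptions.

    // Generic illustration of corePoolSize/maxPoolSize semantics using the JDK,
    // not HBase's executor.ExecutorService. Queue and keep-alive are assumptions.
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ExecutorSizingDemo {
        public static void main(String[] args) {
            // core=1, max=1: a single worker, like most RS_* pools logged above;
            // RS_LOG_REPLAY_OPS uses core=2, max=2 instead.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
            pool.submit(() -> System.out.println("open-region task running"));
            pool.shutdown();
        }
    }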
2023-07-13 15:16:28,026 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,026 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,026 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,026 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,026 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,035 INFO [RS:1;jenkins-hbase4:45979] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:28,035 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45979,1689261387219-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,036 INFO [RS:0;jenkins-hbase4:41061] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:28,036 INFO [RS:2;jenkins-hbase4:42367] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:28,036 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41061,1689261387010-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,036 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42367,1689261387430-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
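Each "Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled." entry above corresponds to a chore being handed to the region server's ChoreService. The following is a minimal sketch of that pattern, assuming the ScheduledChore(String, Stoppable, int) constructor and ChoreService#scheduleChore available in hbase-common 2.x; verify the signatures against the version you build with.

    // Minimal sketch of scheduling a periodic chore, mirroring the
    // "Chore ScheduledChore ... is enabled." lines above. Constructor
    // signatures assume the hbase-common 2.x API.
    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreDemo {
        public static void main(String[] args) throws InterruptedException {
            Stoppable stopper = new Stoppable() {
                private volatile boolean stopped;
                @Override public void stop(String why) { stopped = true; }
                @Override public boolean isStopped() { return stopped; }
            };
            ChoreService service = new ChoreService("demo");
            ScheduledChore chore = new ScheduledChore("DemoChecker", stopper, 1000) {
                @Override protected void chore() {
                    System.out.println("periodic work, like CompactionChecker");
                }
            };
            service.scheduleChore(chore);   // HBase itself logs "... is enabled." here
            Thread.sleep(3000);
            service.shutdown();
        }
    }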
2023-07-13 15:16:28,045 INFO [RS:1;jenkins-hbase4:45979] regionserver.Replication(203): jenkins-hbase4.apache.org,45979,1689261387219 started 2023-07-13 15:16:28,045 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45979,1689261387219, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45979, sessionid=0x1015f41af0e0002 2023-07-13 15:16:28,045 DEBUG [RS:1;jenkins-hbase4:45979] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:28,045 DEBUG [RS:1;jenkins-hbase4:45979] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,045 DEBUG [RS:1;jenkins-hbase4:45979] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45979,1689261387219' 2023-07-13 15:16:28,045 DEBUG [RS:1;jenkins-hbase4:45979] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:28,046 DEBUG [RS:1;jenkins-hbase4:45979] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:28,046 INFO [RS:0;jenkins-hbase4:41061] regionserver.Replication(203): jenkins-hbase4.apache.org,41061,1689261387010 started 2023-07-13 15:16:28,046 INFO [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41061,1689261387010, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41061, sessionid=0x1015f41af0e0001 2023-07-13 15:16:28,046 DEBUG [RS:1;jenkins-hbase4:45979] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:28,046 DEBUG [RS:0;jenkins-hbase4:41061] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:28,046 DEBUG [RS:1;jenkins-hbase4:45979] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:28,046 DEBUG [RS:1;jenkins-hbase4:45979] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,046 DEBUG [RS:1;jenkins-hbase4:45979] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45979,1689261387219' 2023-07-13 15:16:28,046 INFO [RS:2;jenkins-hbase4:42367] regionserver.Replication(203): jenkins-hbase4.apache.org,42367,1689261387430 started 2023-07-13 15:16:28,046 DEBUG [RS:0;jenkins-hbase4:41061] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:28,046 DEBUG [RS:0;jenkins-hbase4:41061] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41061,1689261387010' 2023-07-13 15:16:28,046 DEBUG [RS:0;jenkins-hbase4:41061] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:28,046 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42367,1689261387430, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42367, sessionid=0x1015f41af0e0003 2023-07-13 15:16:28,046 DEBUG [RS:1;jenkins-hbase4:45979] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:28,046 DEBUG [RS:2;jenkins-hbase4:42367] 
procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42367,1689261387430' 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:28,047 DEBUG [RS:0;jenkins-hbase4:41061] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:28,047 DEBUG [RS:1;jenkins-hbase4:45979] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:28,047 DEBUG [RS:0;jenkins-hbase4:41061] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:28,047 DEBUG [RS:1;jenkins-hbase4:45979] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42367,1689261387430' 2023-07-13 15:16:28,047 DEBUG [RS:2;jenkins-hbase4:42367] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:28,047 DEBUG [RS:0;jenkins-hbase4:41061] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:28,047 DEBUG [RS:0;jenkins-hbase4:41061] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:28,047 DEBUG [RS:0;jenkins-hbase4:41061] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41061,1689261387010' 2023-07-13 15:16:28,047 DEBUG [RS:0;jenkins-hbase4:41061] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:28,047 INFO [RS:1;jenkins-hbase4:45979] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 15:16:28,048 DEBUG [RS:2;jenkins-hbase4:42367] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:28,048 DEBUG [RS:0;jenkins-hbase4:41061] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:28,048 DEBUG [RS:2;jenkins-hbase4:42367] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:28,048 INFO [RS:2;jenkins-hbase4:42367] quotas.RegionServerRpcQuotaManager(67): 
Initializing RPC quota support 2023-07-13 15:16:28,048 DEBUG [RS:0;jenkins-hbase4:41061] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:28,048 INFO [RS:0;jenkins-hbase4:41061] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 15:16:28,050 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,050 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,050 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,050 DEBUG [RS:0;jenkins-hbase4:41061] zookeeper.ZKUtil(398): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 15:16:28,050 DEBUG [RS:1;jenkins-hbase4:45979] zookeeper.ZKUtil(398): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 15:16:28,050 INFO [RS:0;jenkins-hbase4:41061] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 15:16:28,050 DEBUG [RS:2;jenkins-hbase4:42367] zookeeper.ZKUtil(398): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 15:16:28,050 INFO [RS:1;jenkins-hbase4:45979] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 15:16:28,050 INFO [RS:2;jenkins-hbase4:42367] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 15:16:28,051 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,051 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,051 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,051 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,051 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,051 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
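The "Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error)" entries above are just an optional-znode probe: when the node is absent, the region server keeps the default (rpc throttle enabled is true). The sketch below shows the same pattern with the plain ZooKeeper client; the quorum string and session timeout are illustrative assumptions, not values from this log.

    // Generic optional-znode read with the plain ZooKeeper client; a missing
    // node means "use the default", mirroring the rpc-throttle probe above.
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class OptionalZNodeRead {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> { });
            try {
                Stat stat = zk.exists("/hbase/rpc-throttle", false);
                if (stat == null) {
                    System.out.println("znode missing -> rpc throttle stays at its default");
                } else {
                    byte[] data = zk.getData("/hbase/rpc-throttle", false, stat);
                    System.out.println("rpc-throttle znode payload: " + data.length + " bytes");
                }
            } finally {
                zk.close();
            }
        }
    }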
2023-07-13 15:16:28,109 DEBUG [jenkins-hbase4:37719] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 15:16:28,109 DEBUG [jenkins-hbase4:37719] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:28,109 DEBUG [jenkins-hbase4:37719] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:28,109 DEBUG [jenkins-hbase4:37719] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:28,109 DEBUG [jenkins-hbase4:37719] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:28,110 DEBUG [jenkins-hbase4:37719] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:28,111 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42367,1689261387430, state=OPENING 2023-07-13 15:16:28,113 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 15:16:28,114 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:28,117 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42367,1689261387430}] 2023-07-13 15:16:28,117 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:28,155 INFO [RS:2;jenkins-hbase4:42367] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42367%2C1689261387430, suffix=, logDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,42367,1689261387430, archiveDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/oldWALs, maxLogs=32 2023-07-13 15:16:28,155 INFO [RS:1;jenkins-hbase4:45979] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45979%2C1689261387219, suffix=, logDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,45979,1689261387219, archiveDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/oldWALs, maxLogs=32 2023-07-13 15:16:28,155 INFO [RS:0;jenkins-hbase4:41061] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41061%2C1689261387010, suffix=, logDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,41061,1689261387010, archiveDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/oldWALs, maxLogs=32 2023-07-13 15:16:28,175 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK] 2023-07-13 15:16:28,175 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK] 2023-07-13 15:16:28,176 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK] 2023-07-13 15:16:28,179 INFO [RS:2;jenkins-hbase4:42367] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,42367,1689261387430/jenkins-hbase4.apache.org%2C42367%2C1689261387430.1689261388156 2023-07-13 15:16:28,180 DEBUG [RS:2;jenkins-hbase4:42367] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK], DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK], DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK]] 2023-07-13 15:16:28,185 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK] 2023-07-13 15:16:28,185 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK] 2023-07-13 15:16:28,185 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK] 2023-07-13 15:16:28,187 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK] 2023-07-13 15:16:28,187 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK] 2023-07-13 15:16:28,187 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK] 2023-07-13 15:16:28,191 INFO [RS:0;jenkins-hbase4:41061] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,41061,1689261387010/jenkins-hbase4.apache.org%2C41061%2C1689261387010.1689261388163 2023-07-13 15:16:28,191 INFO [RS:1;jenkins-hbase4:45979] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,45979,1689261387219/jenkins-hbase4.apache.org%2C45979%2C1689261387219.1689261388163 2023-07-13 15:16:28,191 DEBUG [RS:0;jenkins-hbase4:41061] wal.AbstractFSWAL(887): Create new AsyncFSWAL 
writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK], DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK], DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK]] 2023-07-13 15:16:28,192 DEBUG [RS:1;jenkins-hbase4:45979] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK], DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK], DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK]] 2023-07-13 15:16:28,199 WARN [ReadOnlyZKClient-127.0.0.1:59953@0x2cad65ac] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 15:16:28,199 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37719,1689261386793] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:28,200 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56094, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:28,200 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42367] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:56094 deadline: 1689261448200, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,271 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,272 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:28,274 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56096, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:28,278 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 15:16:28,278 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:28,280 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42367%2C1689261387430.meta, suffix=.meta, logDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,42367,1689261387430, archiveDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/oldWALs, maxLogs=32 2023-07-13 15:16:28,295 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK] 2023-07-13 15:16:28,296 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK] 
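The WARN "Meta region is in state OPENING" followed by the single NotServingRegionException above is a normal transient condition while hbase:meta is still being opened; the caller simply retries until the OPEN location is published. The loop below is only a generic bounded retry-with-backoff illustration of that idea in plain Java, not the actual HBase client retry code, and the probe callable is a placeholder.

    // Generic bounded retry-with-backoff, illustrating why the single
    // NotServingRegionException above is harmless. Not the real HBase client code.
    import java.util.concurrent.Callable;

    public class RetryUntilOnline {
        static <T> T retry(Callable<T> probe, int maxAttempts, long initialBackoffMs) throws Exception {
            if (maxAttempts < 1) {
                throw new IllegalArgumentException("maxAttempts must be >= 1");
            }
            long backoff = initialBackoffMs;
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return probe.call();                 // e.g. a Get against hbase:meta
                } catch (Exception notServingYet) {      // placeholder for NotServingRegionException
                    last = notServingYet;
                    Thread.sleep(backoff);
                    backoff = Math.min(backoff * 2, 10_000);
                }
            }
            throw last;
        }
    }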
2023-07-13 15:16:28,296 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK] 2023-07-13 15:16:28,299 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,42367,1689261387430/jenkins-hbase4.apache.org%2C42367%2C1689261387430.meta.1689261388281.meta 2023-07-13 15:16:28,299 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42985,DS-4c0add0c-0d65-4b4f-bb2f-541d75859790,DISK], DatanodeInfoWithStorage[127.0.0.1:37689,DS-3d217e43-df6e-49c6-8a32-6b0387eb861a,DISK], DatanodeInfoWithStorage[127.0.0.1:37181,DS-fd7463ff-0bf9-4971-a0e5-fb4c6dc7b2cc,DISK]] 2023-07-13 15:16:28,299 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:28,299 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:28,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 15:16:28,300 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-13 15:16:28,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 15:16:28,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:28,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 15:16:28,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 15:16:28,301 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:28,303 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/info 2023-07-13 15:16:28,303 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/info 2023-07-13 15:16:28,303 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:28,304 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:28,304 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:28,305 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:28,305 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:28,305 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:28,306 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:28,306 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:28,306 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/table 2023-07-13 15:16:28,307 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/table 2023-07-13 15:16:28,307 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:28,307 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:28,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740 2023-07-13 15:16:28,310 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740 2023-07-13 15:16:28,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 15:16:28,314 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:28,315 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11023720160, jitterRate=0.026663944125175476}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:28,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:28,315 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689261388271 2023-07-13 15:16:28,320 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 15:16:28,320 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 15:16:28,321 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42367,1689261387430, state=OPEN 2023-07-13 15:16:28,322 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:28,322 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:28,323 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 15:16:28,324 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42367,1689261387430 in 207 msec 2023-07-13 15:16:28,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 15:16:28,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 370 msec 2023-07-13 15:16:28,327 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 436 msec 2023-07-13 15:16:28,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689261388327, completionTime=-1 2023-07-13 15:16:28,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 15:16:28,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-13 15:16:28,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 15:16:28,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689261448331 2023-07-13 15:16:28,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689261508331 2023-07-13 15:16:28,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-13 15:16:28,337 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37719,1689261386793-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37719,1689261386793-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37719,1689261386793-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37719, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-13 15:16:28,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:28,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 15:16:28,339 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 15:16:28,340 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:28,341 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:28,342 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,343 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f empty. 2023-07-13 15:16:28,343 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,343 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 15:16:28,356 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:28,357 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 241df9834cd5e4d861b355d94be84e0f, NAME => 'hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp 2023-07-13 15:16:28,366 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:28,366 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 241df9834cd5e4d861b355d94be84e0f, disabling compactions & flushes 2023-07-13 15:16:28,366 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 
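The create statement above ('hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192', ...}) is built internally by the master. An application would express an equivalent descriptor roughly as below with the HBase 2.x client API; the table name, configuration and connection handling here are illustrative, and this is a sketch rather than the master's own code path.

    // Sketch of building a table descriptor equivalent to the 'info' family
    // spec logged above, using the HBase 2.x client API. Table name and
    // connection setup are illustrative assumptions.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateNamespaceLikeTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("info"))
                    .setBloomFilterType(BloomType.ROW)
                    .setInMemory(true)
                    .setMaxVersions(10)
                    .setBlocksize(8192)
                    .build();
            TableDescriptor table = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("demo_namespace_like")) // illustrative name
                    .setColumnFamily(info)
                    .build();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                admin.createTable(table);
            }
        }
    }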
2023-07-13 15:16:28,366 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:28,366 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. after waiting 0 ms 2023-07-13 15:16:28,366 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:28,366 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:28,366 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 241df9834cd5e4d861b355d94be84e0f: 2023-07-13 15:16:28,368 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:28,369 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261388369"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261388369"}]},"ts":"1689261388369"} 2023-07-13 15:16:28,371 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:28,372 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:28,372 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261388372"}]},"ts":"1689261388372"} 2023-07-13 15:16:28,373 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 15:16:28,376 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:28,377 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:28,377 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:28,377 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:28,377 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:28,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=241df9834cd5e4d861b355d94be84e0f, ASSIGN}] 2023-07-13 15:16:28,381 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=241df9834cd5e4d861b355d94be84e0f, ASSIGN 2023-07-13 15:16:28,381 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=241df9834cd5e4d861b355d94be84e0f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45979,1689261387219; forceNewPlan=false, retain=false 2023-07-13 15:16:28,504 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37719,1689261386793] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:28,507 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37719,1689261386793] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 15:16:28,509 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:28,509 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:28,511 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,511 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f empty. 
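The 'hbase:rsgroup' descriptor above additionally carries a coprocessor (MultiRowMutationEndpoint) and a SPLIT_POLICY attribute pointing at DisabledRegionSplitPolicy. With the same builder API those two attributes would be declared roughly as follows; this is again a hedged sketch, and the setCoprocessor/setRegionSplitPolicyClassName method names assume the 2.x TableDescriptorBuilder.

    // Sketch of attaching the coprocessor and split policy seen in the
    // hbase:rsgroup descriptor above; assumes TableDescriptorBuilder's
    // setCoprocessor/setRegionSplitPolicyClassName in the 2.x client API.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupLikeDescriptor {
        public static TableDescriptor build() throws Exception {
            return TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("demo_rsgroup_like"))     // illustrative name
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m")) // single 'm' family
                    .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                    .setRegionSplitPolicyClassName(
                            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
                    .build();
        }
    }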
2023-07-13 15:16:28,512 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,512 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 15:16:28,524 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:28,526 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e0f2e1746ce9f2ed0f4c6f079fbb7e4f, NAME => 'hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp 2023-07-13 15:16:28,532 INFO [jenkins-hbase4:37719] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:28,533 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=241df9834cd5e4d861b355d94be84e0f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,533 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261388533"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261388533"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261388533"}]},"ts":"1689261388533"} 2023-07-13 15:16:28,535 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 241df9834cd5e4d861b355d94be84e0f, server=jenkins-hbase4.apache.org,45979,1689261387219}] 2023-07-13 15:16:28,538 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:28,538 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e0f2e1746ce9f2ed0f4c6f079fbb7e4f, disabling compactions & flushes 2023-07-13 15:16:28,538 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:28,538 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:28,538 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 
after waiting 0 ms 2023-07-13 15:16:28,538 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:28,538 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:28,538 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e0f2e1746ce9f2ed0f4c6f079fbb7e4f: 2023-07-13 15:16:28,540 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:28,541 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261388541"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261388541"}]},"ts":"1689261388541"} 2023-07-13 15:16:28,542 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:28,543 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:28,543 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261388543"}]},"ts":"1689261388543"} 2023-07-13 15:16:28,544 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 15:16:28,548 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:28,549 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:28,549 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:28,549 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:28,549 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:28,549 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e0f2e1746ce9f2ed0f4c6f079fbb7e4f, ASSIGN}] 2023-07-13 15:16:28,550 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e0f2e1746ce9f2ed0f4c6f079fbb7e4f, ASSIGN 2023-07-13 15:16:28,550 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e0f2e1746ce9f2ed0f4c6f079fbb7e4f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42367,1689261387430; forceNewPlan=false, retain=false 2023-07-13 15:16:28,688 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,688 DEBUG 
[RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:28,690 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41094, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:28,694 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:28,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 241df9834cd5e4d861b355d94be84e0f, NAME => 'hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:28,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:28,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,696 INFO [StoreOpener-241df9834cd5e4d861b355d94be84e0f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,698 DEBUG [StoreOpener-241df9834cd5e4d861b355d94be84e0f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f/info 2023-07-13 15:16:28,698 DEBUG [StoreOpener-241df9834cd5e4d861b355d94be84e0f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f/info 2023-07-13 15:16:28,698 INFO [StoreOpener-241df9834cd5e4d861b355d94be84e0f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 241df9834cd5e4d861b355d94be84e0f columnFamilyName info 2023-07-13 15:16:28,699 INFO [StoreOpener-241df9834cd5e4d861b355d94be84e0f-1] regionserver.HStore(310): Store=241df9834cd5e4d861b355d94be84e0f/info, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:28,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,700 INFO [jenkins-hbase4:37719] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:28,702 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e0f2e1746ce9f2ed0f4c6f079fbb7e4f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,702 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261388702"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261388702"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261388702"}]},"ts":"1689261388702"} 2023-07-13 15:16:28,703 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure e0f2e1746ce9f2ed0f4c6f079fbb7e4f, server=jenkins-hbase4.apache.org,42367,1689261387430}] 2023-07-13 15:16:28,704 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:28,707 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:28,708 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 241df9834cd5e4d861b355d94be84e0f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11991795520, jitterRate=0.1168229877948761}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:28,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 241df9834cd5e4d861b355d94be84e0f: 2023-07-13 15:16:28,708 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f., pid=7, masterSystemTime=1689261388688 2023-07-13 15:16:28,711 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:28,712 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 
2023-07-13 15:16:28,712 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=241df9834cd5e4d861b355d94be84e0f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:28,712 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261388712"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261388712"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261388712"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261388712"}]},"ts":"1689261388712"} 2023-07-13 15:16:28,715 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-13 15:16:28,715 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 241df9834cd5e4d861b355d94be84e0f, server=jenkins-hbase4.apache.org,45979,1689261387219 in 179 msec 2023-07-13 15:16:28,716 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-13 15:16:28,716 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=241df9834cd5e4d861b355d94be84e0f, ASSIGN in 338 msec 2023-07-13 15:16:28,717 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:28,717 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261388717"}]},"ts":"1689261388717"} 2023-07-13 15:16:28,718 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 15:16:28,720 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:28,722 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 382 msec 2023-07-13 15:16:28,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 15:16:28,741 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:28,741 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:28,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:28,746 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41100, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-13 15:16:28,750 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 15:16:28,757 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:28,762 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-13 15:16:28,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 15:16:28,775 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-13 15:16:28,775 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 15:16:28,859 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:28,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e0f2e1746ce9f2ed0f4c6f079fbb7e4f, NAME => 'hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:28,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:28,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. service=MultiRowMutationService 2023-07-13 15:16:28,860 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 15:16:28,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:28,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,861 INFO [StoreOpener-e0f2e1746ce9f2ed0f4c6f079fbb7e4f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,863 DEBUG [StoreOpener-e0f2e1746ce9f2ed0f4c6f079fbb7e4f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/m 2023-07-13 15:16:28,863 DEBUG [StoreOpener-e0f2e1746ce9f2ed0f4c6f079fbb7e4f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/m 2023-07-13 15:16:28,864 INFO [StoreOpener-e0f2e1746ce9f2ed0f4c6f079fbb7e4f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e0f2e1746ce9f2ed0f4c6f079fbb7e4f columnFamilyName m 2023-07-13 15:16:28,865 INFO [StoreOpener-e0f2e1746ce9f2ed0f4c6f079fbb7e4f-1] regionserver.HStore(310): Store=e0f2e1746ce9f2ed0f4c6f079fbb7e4f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:28,866 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,866 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,869 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:28,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:28,871 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e0f2e1746ce9f2ed0f4c6f079fbb7e4f; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2a7d5dac, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:28,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e0f2e1746ce9f2ed0f4c6f079fbb7e4f: 2023-07-13 15:16:28,872 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f., pid=9, masterSystemTime=1689261388855 2023-07-13 15:16:28,874 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:28,874 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:28,874 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e0f2e1746ce9f2ed0f4c6f079fbb7e4f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:28,874 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261388874"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261388874"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261388874"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261388874"}]},"ts":"1689261388874"} 2023-07-13 15:16:28,877 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-13 15:16:28,877 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure e0f2e1746ce9f2ed0f4c6f079fbb7e4f, server=jenkins-hbase4.apache.org,42367,1689261387430 in 173 msec 2023-07-13 15:16:28,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-13 15:16:28,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e0f2e1746ce9f2ed0f4c6f079fbb7e4f, ASSIGN in 328 msec 2023-07-13 15:16:28,905 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:28,909 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 137 msec 2023-07-13 15:16:28,910 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:28,910 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261388910"}]},"ts":"1689261388910"} 2023-07-13 15:16:28,911 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 15:16:28,914 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:28,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 409 msec 2023-07-13 15:16:28,918 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 15:16:28,921 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 15:16:28,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.245sec 2023-07-13 15:16:28,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-13 15:16:28,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:28,922 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-13 15:16:28,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-13 15:16:28,924 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:28,925 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:28,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-13 15:16:28,927 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:28,928 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b empty. 2023-07-13 15:16:28,929 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:28,929 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-13 15:16:28,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-13 15:16:28,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-13 15:16:28,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:28,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 15:16:28,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 15:16:28,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37719,1689261386793-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 15:16:28,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37719,1689261386793-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
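A few entries above, the master requests create 'hbase:quota' with two column families, 'q' and 'u', each declared with VERSIONS => '1', BLOOMFILTER => 'ROW' and BLOCKSIZE => '65536'. As a hedged illustration of how such a descriptor maps onto the HBase 2.x client API (the hbase:quota table itself is created internally by the master, so the table name and connection setup below are placeholders, not part of this test run):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class QuotaTableDescriptorSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Column family settings mirroring what the log prints for 'q' and 'u':
      // VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536'.
      ColumnFamilyDescriptor q = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("q"))
          .setMaxVersions(1)
          .setBloomFilterType(BloomType.ROW)
          .setBlocksize(65536)
          .build();
      ColumnFamilyDescriptor u = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("u"))
          .setMaxVersions(1)
          .setBloomFilterType(BloomType.ROW)
          .setBlocksize(65536)
          .build();
      // "demo:quota_like" is a hypothetical name; the real hbase:quota table
      // is created by MasterQuotaManager, not by client code.
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("demo", "quota_like"))
          .setColumnFamily(q)
          .setColumnFamily(u)
          .build();
      admin.createTable(td);
    }
  }
}

Issuing admin.createTable(td) against a live cluster would drive the same CreateTableProcedure states that the PEWorker threads log here (CREATE_TABLE_PRE_OPERATION, CREATE_TABLE_WRITE_FS_LAYOUT, CREATE_TABLE_ADD_TO_META, CREATE_TABLE_ASSIGN_REGIONS, CREATE_TABLE_UPDATE_DESC_CACHE, CREATE_TABLE_POST_OPERATION).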
2023-07-13 15:16:28,943 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 15:16:28,952 DEBUG [Listener at localhost/34081] zookeeper.ReadOnlyZKClient(139): Connect 0x277c98ff to 127.0.0.1:59953 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:28,953 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:28,960 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 20d883a40da87a6f7c37515b6a04598b, NAME => 'hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp 2023-07-13 15:16:28,963 DEBUG [Listener at localhost/34081] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a1b9d4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:28,965 DEBUG [hconnection-0x6b7167b7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:28,968 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56110, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:28,970 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:28,970 INFO [Listener at localhost/34081] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:28,975 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:28,975 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 20d883a40da87a6f7c37515b6a04598b, disabling compactions & flushes 2023-07-13 15:16:28,975 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:28,975 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:28,975 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 
after waiting 0 ms 2023-07-13 15:16:28,975 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:28,975 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:28,975 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 20d883a40da87a6f7c37515b6a04598b: 2023-07-13 15:16:28,978 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:28,979 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261388979"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261388979"}]},"ts":"1689261388979"} 2023-07-13 15:16:28,980 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:28,981 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:28,981 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261388981"}]},"ts":"1689261388981"} 2023-07-13 15:16:28,982 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-13 15:16:28,986 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:28,986 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:28,986 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:28,986 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:28,986 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:28,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=20d883a40da87a6f7c37515b6a04598b, ASSIGN}] 2023-07-13 15:16:28,987 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=20d883a40da87a6f7c37515b6a04598b, ASSIGN 2023-07-13 15:16:28,988 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=20d883a40da87a6f7c37515b6a04598b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42367,1689261387430; forceNewPlan=false, retain=false 2023-07-13 15:16:29,010 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup 
table=hbase:rsgroup is online, refreshing cached information 2023-07-13 15:16:29,010 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-13 15:16:29,014 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:29,014 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:29,017 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:29,019 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37719,1689261386793] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 15:16:29,073 DEBUG [Listener at localhost/34081] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 15:16:29,075 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60858, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 15:16:29,078 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 15:16:29,078 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:29,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 15:16:29,079 DEBUG [Listener at localhost/34081] zookeeper.ReadOnlyZKClient(139): Connect 0x548b4587 to 127.0.0.1:59953 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:29,085 DEBUG [Listener at localhost/34081] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34a7cc7e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:29,085 INFO [Listener at localhost/34081] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59953 2023-07-13 15:16:29,087 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:29,088 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015f41af0e000a connected 2023-07-13 15:16:29,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 
'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-13 15:16:29,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-13 15:16:29,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-13 15:16:29,105 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:29,108 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 16 msec 2023-07-13 15:16:29,138 INFO [jenkins-hbase4:37719] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:29,140 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=20d883a40da87a6f7c37515b6a04598b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:29,140 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261389140"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261389140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261389140"}]},"ts":"1689261389140"} 2023-07-13 15:16:29,141 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure 20d883a40da87a6f7c37515b6a04598b, server=jenkins-hbase4.apache.org,42367,1689261387430}] 2023-07-13 15:16:29,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-13 15:16:29,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:29,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-13 15:16:29,206 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:29,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-13 15:16:29,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:29,208 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:29,209 DEBUG [PEWorker-2] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:29,210 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:29,212 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,213 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726 empty. 2023-07-13 15:16:29,213 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,213 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-13 15:16:29,226 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:29,227 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => abe48f9bc98c5ca19cab453c19cdc726, NAME => 'np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp 2023-07-13 15:16:29,235 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:29,236 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing abe48f9bc98c5ca19cab453c19cdc726, disabling compactions & flushes 2023-07-13 15:16:29,236 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 2023-07-13 15:16:29,236 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 2023-07-13 15:16:29,236 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. after waiting 0 ms 2023-07-13 15:16:29,236 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 2023-07-13 15:16:29,236 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 
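Earlier in this run the client creates namespace 'np1' with hbase.namespace.quota.maxregions => '5' and hbase.namespace.quota.maxtables => '2'; these per-namespace limits are what later produce the QuotaExceededException when np1:table2 is requested. A minimal sketch of issuing an equivalent request through the Admin API, assuming standard connection boilerplate that does not appear in this log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class QuotaNamespaceSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Namespace capped at 5 regions and 2 tables, matching the quota keys
      // printed by HMaster when 'np1' is created in this log.
      NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build();
      admin.createNamespace(np1);
    }
  }
}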
2023-07-13 15:16:29,236 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for abe48f9bc98c5ca19cab453c19cdc726: 2023-07-13 15:16:29,238 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:29,239 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261389239"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261389239"}]},"ts":"1689261389239"} 2023-07-13 15:16:29,240 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:29,241 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:29,241 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261389241"}]},"ts":"1689261389241"} 2023-07-13 15:16:29,242 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-13 15:16:29,245 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:29,245 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:29,245 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:29,245 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:29,245 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:29,246 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=abe48f9bc98c5ca19cab453c19cdc726, ASSIGN}] 2023-07-13 15:16:29,246 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=abe48f9bc98c5ca19cab453c19cdc726, ASSIGN 2023-07-13 15:16:29,247 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=abe48f9bc98c5ca19cab453c19cdc726, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41061,1689261387010; forceNewPlan=false, retain=false 2023-07-13 15:16:29,296 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 
2023-07-13 15:16:29,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 20d883a40da87a6f7c37515b6a04598b, NAME => 'hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:29,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:29,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:29,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:29,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:29,297 INFO [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:29,299 DEBUG [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b/q 2023-07-13 15:16:29,299 DEBUG [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b/q 2023-07-13 15:16:29,299 INFO [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 20d883a40da87a6f7c37515b6a04598b columnFamilyName q 2023-07-13 15:16:29,300 INFO [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] regionserver.HStore(310): Store=20d883a40da87a6f7c37515b6a04598b/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:29,300 INFO [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:29,301 DEBUG 
[StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b/u 2023-07-13 15:16:29,301 DEBUG [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b/u 2023-07-13 15:16:29,301 INFO [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 20d883a40da87a6f7c37515b6a04598b columnFamilyName u 2023-07-13 15:16:29,302 INFO [StoreOpener-20d883a40da87a6f7c37515b6a04598b-1] regionserver.HStore(310): Store=20d883a40da87a6f7c37515b6a04598b/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:29,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:29,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:29,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
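The FlushLargeStoresPolicy entry above reports that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:quota descriptor, so the lower bound falls back to the region memstore flush heap size divided by the number of families; with the default 128 MB flush size and the two families q and u that works out to the 64 MB (flushSizeLowerBound=67108864) seen when the region opens below. If an explicit bound were wanted, it could be supplied as a table-descriptor property; the table name below is hypothetical and the value is only an example:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class FlushLowerBoundSketch {
  public static void main(String[] args) {
    // Illustrative only: pin the per-column-family flush lower bound to 64 MB
    // on a hypothetical table instead of relying on the computed fallback.
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo", "flush_bound_example"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("q"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("u"))
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(64L * 1024 * 1024))
        .build();
    System.out.println(td);
  }
}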
2023-07-13 15:16:29,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:29,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:29,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:29,309 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 20d883a40da87a6f7c37515b6a04598b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11637941760, jitterRate=0.08386778831481934}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-13 15:16:29,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 20d883a40da87a6f7c37515b6a04598b: 2023-07-13 15:16:29,310 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b., pid=15, masterSystemTime=1689261389292 2023-07-13 15:16:29,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:29,312 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 
2023-07-13 15:16:29,312 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=20d883a40da87a6f7c37515b6a04598b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:29,313 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261389312"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261389312"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261389312"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261389312"}]},"ts":"1689261389312"} 2023-07-13 15:16:29,315 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-13 15:16:29,315 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure 20d883a40da87a6f7c37515b6a04598b, server=jenkins-hbase4.apache.org,42367,1689261387430 in 173 msec 2023-07-13 15:16:29,317 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 15:16:29,317 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=20d883a40da87a6f7c37515b6a04598b, ASSIGN in 329 msec 2023-07-13 15:16:29,318 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:29,318 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261389318"}]},"ts":"1689261389318"} 2023-07-13 15:16:29,319 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-13 15:16:29,322 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:29,324 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 401 msec 2023-07-13 15:16:29,397 INFO [jenkins-hbase4:37719] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 15:16:29,399 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=abe48f9bc98c5ca19cab453c19cdc726, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:29,399 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261389399"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261389399"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261389399"}]},"ts":"1689261389399"} 2023-07-13 15:16:29,400 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure abe48f9bc98c5ca19cab453c19cdc726, server=jenkins-hbase4.apache.org,41061,1689261387010}] 2023-07-13 15:16:29,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:29,552 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:29,553 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:29,554 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46186, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:29,558 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 2023-07-13 15:16:29,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => abe48f9bc98c5ca19cab453c19cdc726, NAME => 'np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:29,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:29,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,560 INFO [StoreOpener-abe48f9bc98c5ca19cab453c19cdc726-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,561 DEBUG [StoreOpener-abe48f9bc98c5ca19cab453c19cdc726-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726/fam1 2023-07-13 15:16:29,561 DEBUG 
[StoreOpener-abe48f9bc98c5ca19cab453c19cdc726-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726/fam1 2023-07-13 15:16:29,562 INFO [StoreOpener-abe48f9bc98c5ca19cab453c19cdc726-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region abe48f9bc98c5ca19cab453c19cdc726 columnFamilyName fam1 2023-07-13 15:16:29,562 INFO [StoreOpener-abe48f9bc98c5ca19cab453c19cdc726-1] regionserver.HStore(310): Store=abe48f9bc98c5ca19cab453c19cdc726/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:29,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:29,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:29,568 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened abe48f9bc98c5ca19cab453c19cdc726; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10483247200, jitterRate=-0.023671522736549377}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:29,569 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for abe48f9bc98c5ca19cab453c19cdc726: 2023-07-13 15:16:29,569 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726., pid=18, masterSystemTime=1689261389552 2023-07-13 15:16:29,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 2023-07-13 15:16:29,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 
2023-07-13 15:16:29,573 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=abe48f9bc98c5ca19cab453c19cdc726, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:29,573 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261389573"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261389573"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261389573"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261389573"}]},"ts":"1689261389573"} 2023-07-13 15:16:29,575 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-13 15:16:29,575 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure abe48f9bc98c5ca19cab453c19cdc726, server=jenkins-hbase4.apache.org,41061,1689261387010 in 174 msec 2023-07-13 15:16:29,577 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-13 15:16:29,577 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=abe48f9bc98c5ca19cab453c19cdc726, ASSIGN in 329 msec 2023-07-13 15:16:29,577 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:29,578 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261389577"}]},"ts":"1689261389577"} 2023-07-13 15:16:29,579 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-13 15:16:29,581 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:29,582 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 378 msec 2023-07-13 15:16:29,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:29,811 INFO [Listener at localhost/34081] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-13 15:16:29,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:29,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-13 15:16:29,816 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:29,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-13 15:16:29,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 15:16:29,839 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=26 msec 2023-07-13 15:16:29,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 15:16:29,921 INFO [Listener at localhost/34081] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-13 15:16:29,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:29,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:29,923 INFO [Listener at localhost/34081] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-13 15:16:29,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-13 15:16:29,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-13 15:16:29,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 15:16:29,927 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261389927"}]},"ts":"1689261389927"} 2023-07-13 15:16:29,928 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-13 15:16:29,930 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-13 15:16:29,930 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=abe48f9bc98c5ca19cab453c19cdc726, UNASSIGN}] 2023-07-13 15:16:29,931 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=abe48f9bc98c5ca19cab453c19cdc726, UNASSIGN 2023-07-13 15:16:29,931 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=abe48f9bc98c5ca19cab453c19cdc726, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:29,931 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261389931"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261389931"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261389931"}]},"ts":"1689261389931"} 2023-07-13 15:16:29,932 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure abe48f9bc98c5ca19cab453c19cdc726, server=jenkins-hbase4.apache.org,41061,1689261387010}] 2023-07-13 15:16:30,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 15:16:30,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:30,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing abe48f9bc98c5ca19cab453c19cdc726, disabling compactions & flushes 2023-07-13 15:16:30,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 2023-07-13 15:16:30,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 2023-07-13 15:16:30,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. after waiting 0 ms 2023-07-13 15:16:30,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 2023-07-13 15:16:30,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:30,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726. 
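
A few entries above, pid=19 is rolled back with QuotaExceededException because namespace np1 is capped at five regions and np1:table2 would push the count to six. That cap comes from the namespace's hbase.namespace.quota.maxregions setting. Below is a hedged sketch of declaring such a quota and seeing the failure from the client side; the class and method names are illustrative and the split keys are only there to request enough regions to trip the limit, as in this run.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class NamespaceRegionQuota {
  // Declares namespace "np1" capped at five regions, then attempts a table whose
  // regions would exceed the cap. Assumes the caller supplies an open Admin.
  static void demo(Admin admin) throws IOException {
    NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
        // Region quota for the whole namespace; exceeding it fails table creation.
        .addConfiguration("hbase.namespace.quota.maxregions", "5")
        .build();
    admin.createNamespace(np1);

    TableDescriptorBuilder table2 = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("np1:table2"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"));
    // Four split keys => five regions; together with np1:table1's single region the
    // namespace would hold six, which is what the rolled-back pid=19 complains about.
    byte[][] splits = {
        Bytes.toBytes("b"), Bytes.toBytes("c"), Bytes.toBytes("d"), Bytes.toBytes("e")
    };
    try {
      admin.createTable(table2.build(), splits);
    } catch (IOException e) {
      // The master rolls the procedure back; the QuotaExceededException reaches
      // the client wrapped in an IOException.
      System.out.println("create rejected: " + e.getMessage());
    }
  }
}
```
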
2023-07-13 15:16:30,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for abe48f9bc98c5ca19cab453c19cdc726: 2023-07-13 15:16:30,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:30,092 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=abe48f9bc98c5ca19cab453c19cdc726, regionState=CLOSED 2023-07-13 15:16:30,092 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261390091"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261390091"}]},"ts":"1689261390091"} 2023-07-13 15:16:30,094 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-13 15:16:30,094 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure abe48f9bc98c5ca19cab453c19cdc726, server=jenkins-hbase4.apache.org,41061,1689261387010 in 161 msec 2023-07-13 15:16:30,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-13 15:16:30,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=abe48f9bc98c5ca19cab453c19cdc726, UNASSIGN in 164 msec 2023-07-13 15:16:30,099 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261390099"}]},"ts":"1689261390099"} 2023-07-13 15:16:30,101 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-13 15:16:30,102 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-13 15:16:30,107 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 179 msec 2023-07-13 15:16:30,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 15:16:30,229 INFO [Listener at localhost/34081] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-13 15:16:30,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-13 15:16:30,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-13 15:16:30,232 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 15:16:30,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-13 15:16:30,232 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 15:16:30,234 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:30,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:30,236 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:30,238 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726/fam1, FileablePath, hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726/recovered.edits] 2023-07-13 15:16:30,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 15:16:30,243 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726/recovered.edits/4.seqid to hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/archive/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726/recovered.edits/4.seqid 2023-07-13 15:16:30,243 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/.tmp/data/np1/table1/abe48f9bc98c5ca19cab453c19cdc726 2023-07-13 15:16:30,244 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-13 15:16:30,246 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 15:16:30,247 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-13 15:16:30,248 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-13 15:16:30,249 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 15:16:30,249 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-13 15:16:30,249 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261390249"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:30,252 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:30,252 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => abe48f9bc98c5ca19cab453c19cdc726, NAME => 'np1:table1,,1689261389203.abe48f9bc98c5ca19cab453c19cdc726.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:30,252 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
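
The disable (pid=20), table delete (pid=23) and, just below, namespace delete (pid=24) follow the usual client ordering: disable before delete, and only an empty namespace can be dropped. The HFileArchiver entries above are the DELETE_TABLE_CLEAR_FS_LAYOUT step moving the region directory into the archive. A minimal client-side equivalent, assuming an open Admin handle (illustrative, not the test's code):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class CleanupNp1 {
  // Disable before delete, and drop the namespace only once it is empty --
  // the same ordering the procedures in this log follow.
  static void cleanup(Admin admin) throws IOException {
    TableName table1 = TableName.valueOf("np1:table1");
    if (admin.tableExists(table1)) {
      admin.disableTable(table1);  // DisableTableProcedure (pid=20 in this run)
      admin.deleteTable(table1);   // DeleteTableProcedure (pid=23); region dirs get archived
    }
    admin.deleteNamespace("np1");  // DeleteNamespaceProcedure (pid=24, just below)
  }
}
```
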
2023-07-13 15:16:30,252 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261390252"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:30,253 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-13 15:16:30,259 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 15:16:30,260 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 30 msec 2023-07-13 15:16:30,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 15:16:30,341 INFO [Listener at localhost/34081] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-13 15:16:30,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-13 15:16:30,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-13 15:16:30,357 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 15:16:30,361 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 15:16:30,364 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 15:16:30,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 15:16:30,365 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-13 15:16:30,366 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:30,366 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 15:16:30,370 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 15:16:30,371 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 23 msec 2023-07-13 15:16:30,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37719] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 15:16:30,466 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 15:16:30,466 INFO [Listener at 
localhost/34081] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 15:16:30,466 DEBUG [Listener at localhost/34081] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x277c98ff to 127.0.0.1:59953 2023-07-13 15:16:30,466 DEBUG [Listener at localhost/34081] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,466 DEBUG [Listener at localhost/34081] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 15:16:30,466 DEBUG [Listener at localhost/34081] util.JVMClusterUtil(257): Found active master hash=928928786, stopped=false 2023-07-13 15:16:30,466 DEBUG [Listener at localhost/34081] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:30,466 DEBUG [Listener at localhost/34081] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:30,467 DEBUG [Listener at localhost/34081] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-13 15:16:30,467 INFO [Listener at localhost/34081] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:30,468 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:30,468 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:30,468 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:30,468 INFO [Listener at localhost/34081] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 15:16:30,468 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:30,468 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:30,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:30,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:30,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:30,473 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
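
The NodeDeleted events for /hbase/running above are how shutdown is broadcast: the master deletes the znode and every ZKWatcher in the cluster reacts, then re-sets a watch on the now-absent node. The sketch below watches the same znode with the plain ZooKeeper client; the quorum string and path are copied from the log, everything else is assumed and purely illustrative of the mechanism, not HBase's internal ZKWatcher.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatchRunningZnode {
  public static void main(String[] args) throws Exception {
    CountDownLatch deleted = new CountDownLatch(1);
    // Quorum string as reported by ZKWatcher in this log.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:59953", 30_000, event -> {
      if (event.getType() == Watcher.Event.EventType.NodeDeleted
          && "/hbase/running".equals(event.getPath())) {
        deleted.countDown();  // master has initiated cluster shutdown
      }
    });
    // Registers a watch whether or not the znode currently exists.
    zk.exists("/hbase/running", true);
    deleted.await();
    System.out.println("/hbase/running deleted -- cluster is shutting down");
    zk.close();
  }
}
```
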
2023-07-13 15:16:30,473 DEBUG [Listener at localhost/34081] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2cad65ac to 127.0.0.1:59953 2023-07-13 15:16:30,473 DEBUG [Listener at localhost/34081] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,473 INFO [Listener at localhost/34081] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41061,1689261387010' ***** 2023-07-13 15:16:30,473 INFO [Listener at localhost/34081] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:30,473 INFO [Listener at localhost/34081] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45979,1689261387219' ***** 2023-07-13 15:16:30,474 INFO [Listener at localhost/34081] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:30,474 INFO [Listener at localhost/34081] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42367,1689261387430' ***** 2023-07-13 15:16:30,474 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:30,474 INFO [Listener at localhost/34081] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:30,474 INFO [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:30,475 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:30,491 INFO [RS:0;jenkins-hbase4:41061] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@59f5ce37{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:30,491 INFO [RS:2;jenkins-hbase4:42367] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@54e4bf8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:30,491 INFO [RS:1;jenkins-hbase4:45979] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1fa6830d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:30,492 INFO [RS:0;jenkins-hbase4:41061] server.AbstractConnector(383): Stopped ServerConnector@23cf60b9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:30,492 INFO [RS:1;jenkins-hbase4:45979] server.AbstractConnector(383): Stopped ServerConnector@1a24290f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:30,492 INFO [RS:2;jenkins-hbase4:42367] server.AbstractConnector(383): Stopped ServerConnector@28bc48f0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:30,492 INFO [RS:1;jenkins-hbase4:45979] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:30,492 INFO [RS:0;jenkins-hbase4:41061] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:30,492 INFO [RS:2;jenkins-hbase4:42367] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:30,495 INFO [RS:1;jenkins-hbase4:45979] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@321cd692{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 
15:16:30,495 INFO [RS:2;jenkins-hbase4:42367] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f728d8a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:30,495 INFO [RS:0;jenkins-hbase4:41061] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@204cfa25{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:30,495 INFO [RS:2;jenkins-hbase4:42367] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@36cdc80c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:30,495 INFO [RS:0;jenkins-hbase4:41061] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3ee8c6a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:30,495 INFO [RS:1;jenkins-hbase4:45979] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@30ad32c0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:30,496 INFO [RS:2;jenkins-hbase4:42367] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:30,496 INFO [RS:2;jenkins-hbase4:42367] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:30,496 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:30,496 INFO [RS:2;jenkins-hbase4:42367] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:30,496 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(3305): Received CLOSE for 20d883a40da87a6f7c37515b6a04598b 2023-07-13 15:16:30,497 INFO [RS:0;jenkins-hbase4:41061] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:30,497 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(3305): Received CLOSE for e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:30,497 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:30,498 INFO [RS:0;jenkins-hbase4:41061] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:30,498 INFO [RS:0;jenkins-hbase4:41061] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
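
The stopped Jetty info servers, chores, flush/snapshot managers and RPC servers in these entries are all driven by HBaseTestingUtility.shutdownMiniCluster(), the counterpart of the earlier cluster startup. A typical JUnit wiring of that lifecycle looks like the sketch below; this is the common pattern, assumed rather than taken from TestRSGroupsAdmin1 itself.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterLifecycle {
  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Three region servers, matching the RS:0/RS:1/RS:2 threads shutting down here.
    UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Emits the "Shutting down minicluster" sequence: client connections are closed,
    // the master is asked to stop the cluster, then each region server is stopped.
    UTIL.shutdownMiniCluster();
  }
}
```
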
2023-07-13 15:16:30,497 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:30,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 20d883a40da87a6f7c37515b6a04598b, disabling compactions & flushes 2023-07-13 15:16:30,498 INFO [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:30,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:30,498 DEBUG [RS:2;jenkins-hbase4:42367] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x24e91c32 to 127.0.0.1:59953 2023-07-13 15:16:30,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:30,499 DEBUG [RS:0;jenkins-hbase4:41061] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2babce8f to 127.0.0.1:59953 2023-07-13 15:16:30,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. after waiting 0 ms 2023-07-13 15:16:30,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:30,499 INFO [RS:1;jenkins-hbase4:45979] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:30,499 DEBUG [RS:2;jenkins-hbase4:42367] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,500 INFO [RS:2;jenkins-hbase4:42367] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:30,500 INFO [RS:2;jenkins-hbase4:42367] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:30,500 INFO [RS:2;jenkins-hbase4:42367] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:30,500 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:30,500 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 15:16:30,499 DEBUG [RS:0;jenkins-hbase4:41061] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,500 INFO [RS:1;jenkins-hbase4:45979] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:30,501 INFO [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41061,1689261387010; all regions closed. 2023-07-13 15:16:30,501 DEBUG [RS:0;jenkins-hbase4:41061] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-13 15:16:30,501 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-13 15:16:30,501 INFO [RS:1;jenkins-hbase4:45979] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
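
"Waiting on 3 regions to close" above and the "Online Regions={...}" map just below are the region server's view of what it still hosts at shutdown. While a server is alive, the same set can be fetched through the Admin API; a sketch with the server name values copied from the log entries for RS:2 (only meaningful while that server was running):

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;

public final class OnlineRegions {
  // Lists the regions a live region server currently hosts -- the same set the
  // shutdown path prints as "Online Regions={...}" just below.
  static void dump(Admin admin) throws IOException {
    // Host, port and start code copied from the log entries for RS:2.
    ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org", 42367, 1689261387430L);
    List<RegionInfo> regions = admin.getRegions(rs);
    for (RegionInfo region : regions) {
      System.out.println(region.getRegionNameAsString());
    }
  }
}
```
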
2023-07-13 15:16:30,501 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 20d883a40da87a6f7c37515b6a04598b=hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b., e0f2e1746ce9f2ed0f4c6f079fbb7e4f=hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f.} 2023-07-13 15:16:30,501 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(3305): Received CLOSE for 241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:30,501 DEBUG [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1504): Waiting on 1588230740, 20d883a40da87a6f7c37515b6a04598b, e0f2e1746ce9f2ed0f4c6f079fbb7e4f 2023-07-13 15:16:30,503 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:30,503 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:30,503 DEBUG [RS:1;jenkins-hbase4:45979] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0067e945 to 127.0.0.1:59953 2023-07-13 15:16:30,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:30,503 DEBUG [RS:1;jenkins-hbase4:45979] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,503 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:30,504 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 15:16:30,506 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1478): Online Regions={241df9834cd5e4d861b355d94be84e0f=hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f.} 2023-07-13 15:16:30,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 241df9834cd5e4d861b355d94be84e0f, disabling compactions & flushes 2023-07-13 15:16:30,504 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:30,506 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:30,506 DEBUG [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1504): Waiting on 241df9834cd5e4d861b355d94be84e0f 2023-07-13 15:16:30,506 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-13 15:16:30,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:30,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:30,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. after waiting 0 ms 2023-07-13 15:16:30,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 
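
The "Flushing 1588230740 3/3 column families" and "Flushing 241df9834cd5e4d861b355d94be84e0f" entries are the close path persisting memstore contents to HFiles before the regions go away. The same flush can be requested explicitly by a client at any time; a minimal sketch assuming an open Admin handle:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class ExplicitFlush {
  // Pushes memstore contents of every region of a table out to HFiles on demand,
  // the same persistence the close path performs in the entries above.
  static void flushSystemTables(Admin admin) throws IOException {
    admin.flush(TableName.META_TABLE_NAME);             // hbase:meta (1588230740)
    admin.flush(TableName.valueOf("hbase:namespace"));  // 241df9834cd5e4d861b355d94be84e0f
  }
}
```
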
2023-07-13 15:16:30,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 241df9834cd5e4d861b355d94be84e0f 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-13 15:16:30,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/quota/20d883a40da87a6f7c37515b6a04598b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:30,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:30,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 20d883a40da87a6f7c37515b6a04598b: 2023-07-13 15:16:30,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689261388921.20d883a40da87a6f7c37515b6a04598b. 2023-07-13 15:16:30,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e0f2e1746ce9f2ed0f4c6f079fbb7e4f, disabling compactions & flushes 2023-07-13 15:16:30,511 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:30,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:30,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. after waiting 0 ms 2023-07-13 15:16:30,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:30,511 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e0f2e1746ce9f2ed0f4c6f079fbb7e4f 1/1 column families, dataSize=633 B heapSize=1.09 KB 2023-07-13 15:16:30,520 DEBUG [RS:0;jenkins-hbase4:41061] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/oldWALs 2023-07-13 15:16:30,520 INFO [RS:0;jenkins-hbase4:41061] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41061%2C1689261387010:(num 1689261388163) 2023-07-13 15:16:30,520 DEBUG [RS:0;jenkins-hbase4:41061] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,520 INFO [RS:0;jenkins-hbase4:41061] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:30,526 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:30,528 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:30,528 INFO [RS:0;jenkins-hbase4:41061] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:30,528 INFO [RS:0;jenkins-hbase4:41061] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-13 15:16:30,528 INFO [RS:0;jenkins-hbase4:41061] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:30,529 INFO [RS:0;jenkins-hbase4:41061] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:30,528 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:30,529 INFO [RS:0;jenkins-hbase4:41061] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41061 2023-07-13 15:16:30,528 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:30,537 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/.tmp/info/e89894177b994fe3acf107ff56ca4c4a 2023-07-13 15:16:30,537 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:30,537 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:30,537 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:30,537 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:30,537 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:30,537 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41061,1689261387010 2023-07-13 15:16:30,537 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:30,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f/.tmp/info/889335c15efc41fc92e9df47dc805e21 2023-07-13 15:16:30,550 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e89894177b994fe3acf107ff56ca4c4a 2023-07-13 15:16:30,551 INFO 
[RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41061,1689261387010] 2023-07-13 15:16:30,551 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41061,1689261387010; numProcessing=1 2023-07-13 15:16:30,553 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41061,1689261387010 already deleted, retry=false 2023-07-13 15:16:30,553 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41061,1689261387010 expired; onlineServers=2 2023-07-13 15:16:30,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 889335c15efc41fc92e9df47dc805e21 2023-07-13 15:16:30,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f/.tmp/info/889335c15efc41fc92e9df47dc805e21 as hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f/info/889335c15efc41fc92e9df47dc805e21 2023-07-13 15:16:30,572 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 889335c15efc41fc92e9df47dc805e21 2023-07-13 15:16:30,572 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f/info/889335c15efc41fc92e9df47dc805e21, entries=3, sequenceid=8, filesize=5.0 K 2023-07-13 15:16:30,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=633 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/.tmp/m/ce63be3970fb4bfdbda9593f5b554c15 2023-07-13 15:16:30,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 241df9834cd5e4d861b355d94be84e0f in 69ms, sequenceid=8, compaction requested=false 2023-07-13 15:16:30,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 15:16:30,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/.tmp/m/ce63be3970fb4bfdbda9593f5b554c15 as hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/m/ce63be3970fb4bfdbda9593f5b554c15 2023-07-13 15:16:30,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/namespace/241df9834cd5e4d861b355d94be84e0f/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-13 15:16:30,589 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:30,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 241df9834cd5e4d861b355d94be84e0f: 2023-07-13 15:16:30,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689261388338.241df9834cd5e4d861b355d94be84e0f. 2023-07-13 15:16:30,591 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/.tmp/rep_barrier/fc16e7320dcd4a38952817610c5f8155 2023-07-13 15:16:30,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/m/ce63be3970fb4bfdbda9593f5b554c15, entries=1, sequenceid=7, filesize=4.9 K 2023-07-13 15:16:30,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~633 B/633, heapSize ~1.07 KB/1096, currentSize=0 B/0 for e0f2e1746ce9f2ed0f4c6f079fbb7e4f in 82ms, sequenceid=7, compaction requested=false 2023-07-13 15:16:30,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 15:16:30,599 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fc16e7320dcd4a38952817610c5f8155 2023-07-13 15:16:30,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-13 15:16:30,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:30,603 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 2023-07-13 15:16:30,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e0f2e1746ce9f2ed0f4c6f079fbb7e4f: 2023-07-13 15:16:30,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689261388504.e0f2e1746ce9f2ed0f4c6f079fbb7e4f. 
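
Each flush above lands first under the region's .tmp directory and is then committed into the column-family directory (for example .../data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/m/ce63be3970fb4bfdbda9593f5b554c15). Once committed, store files are plain HDFS files and can be listed with the ordinary Hadoop client; the sketch below hard-codes the NameNode address and path from the log, which only existed while this test's HDFS minicluster was up, so treat it purely as an illustration.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStoreFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // NameNode and data root as they appear in the log; both disappear once the
    // minicluster shuts down.
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:32909"), conf);
    Path family = new Path("/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/"
        + "data/hbase/rsgroup/e0f2e1746ce9f2ed0f4c6f079fbb7e4f/m");
    for (FileStatus hfile : fs.listStatus(family)) {
      // Prints committed store files such as ce63be3970fb4bfdbda9593f5b554c15.
      System.out.println(hfile.getPath().getName() + " (" + hfile.getLen() + " bytes)");
    }
  }
}
```
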
2023-07-13 15:16:30,611 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/.tmp/table/22d0198b184d4ed4b2dc2dd5054b6a00 2023-07-13 15:16:30,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 22d0198b184d4ed4b2dc2dd5054b6a00 2023-07-13 15:16:30,619 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/.tmp/info/e89894177b994fe3acf107ff56ca4c4a as hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/info/e89894177b994fe3acf107ff56ca4c4a 2023-07-13 15:16:30,625 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e89894177b994fe3acf107ff56ca4c4a 2023-07-13 15:16:30,625 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/info/e89894177b994fe3acf107ff56ca4c4a, entries=32, sequenceid=31, filesize=8.5 K 2023-07-13 15:16:30,626 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/.tmp/rep_barrier/fc16e7320dcd4a38952817610c5f8155 as hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/rep_barrier/fc16e7320dcd4a38952817610c5f8155 2023-07-13 15:16:30,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fc16e7320dcd4a38952817610c5f8155 2023-07-13 15:16:30,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/rep_barrier/fc16e7320dcd4a38952817610c5f8155, entries=1, sequenceid=31, filesize=4.9 K 2023-07-13 15:16:30,634 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/.tmp/table/22d0198b184d4ed4b2dc2dd5054b6a00 as hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/table/22d0198b184d4ed4b2dc2dd5054b6a00 2023-07-13 15:16:30,639 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 22d0198b184d4ed4b2dc2dd5054b6a00 2023-07-13 15:16:30,639 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/table/22d0198b184d4ed4b2dc2dd5054b6a00, entries=8, sequenceid=31, filesize=5.2 K 2023-07-13 15:16:30,640 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 
KB/11312, currentSize=0 B/0 for 1588230740 in 134ms, sequenceid=31, compaction requested=false 2023-07-13 15:16:30,640 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:30,651 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-13 15:16:30,652 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:30,652 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:30,652 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:30,652 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:30,668 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:30,668 INFO [RS:0;jenkins-hbase4:41061] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41061,1689261387010; zookeeper connection closed. 2023-07-13 15:16:30,668 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:41061-0x1015f41af0e0001, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:30,669 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@68d34499] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@68d34499 2023-07-13 15:16:30,702 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42367,1689261387430; all regions closed. 2023-07-13 15:16:30,702 DEBUG [RS:2;jenkins-hbase4:42367] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-13 15:16:30,706 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/WALs/jenkins-hbase4.apache.org,42367,1689261387430/jenkins-hbase4.apache.org%2C42367%2C1689261387430.meta.1689261388281.meta not finished, retry = 0 2023-07-13 15:16:30,706 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45979,1689261387219; all regions closed. 2023-07-13 15:16:30,707 DEBUG [RS:1;jenkins-hbase4:45979] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-13 15:16:30,714 DEBUG [RS:1;jenkins-hbase4:45979] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/oldWALs 2023-07-13 15:16:30,714 INFO [RS:1;jenkins-hbase4:45979] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45979%2C1689261387219:(num 1689261388163) 2023-07-13 15:16:30,714 DEBUG [RS:1;jenkins-hbase4:45979] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,714 INFO [RS:1;jenkins-hbase4:45979] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:30,714 INFO [RS:1;jenkins-hbase4:45979] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:30,714 INFO [RS:1;jenkins-hbase4:45979] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:30,714 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:30,714 INFO [RS:1;jenkins-hbase4:45979] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:30,714 INFO [RS:1;jenkins-hbase4:45979] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:30,715 INFO [RS:1;jenkins-hbase4:45979] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45979 2023-07-13 15:16:30,719 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:30,719 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:30,719 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45979,1689261387219 2023-07-13 15:16:30,721 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45979,1689261387219] 2023-07-13 15:16:30,721 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45979,1689261387219; numProcessing=2 2023-07-13 15:16:30,722 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45979,1689261387219 already deleted, retry=false 2023-07-13 15:16:30,722 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45979,1689261387219 expired; onlineServers=1 2023-07-13 15:16:30,809 DEBUG [RS:2;jenkins-hbase4:42367] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/oldWALs 2023-07-13 15:16:30,809 INFO [RS:2;jenkins-hbase4:42367] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42367%2C1689261387430.meta:.meta(num 1689261388281) 2023-07-13 15:16:30,824 DEBUG [RS:2;jenkins-hbase4:42367] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/oldWALs 2023-07-13 15:16:30,824 INFO [RS:2;jenkins-hbase4:42367] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42367%2C1689261387430:(num 1689261388156) 2023-07-13 15:16:30,824 DEBUG [RS:2;jenkins-hbase4:42367] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,824 INFO [RS:2;jenkins-hbase4:42367] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:30,825 INFO [RS:2;jenkins-hbase4:42367] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:30,825 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:30,826 INFO [RS:2;jenkins-hbase4:42367] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42367 2023-07-13 15:16:30,829 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:30,829 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42367,1689261387430 2023-07-13 15:16:30,830 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42367,1689261387430] 2023-07-13 15:16:30,830 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42367,1689261387430; numProcessing=3 2023-07-13 15:16:30,832 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42367,1689261387430 already deleted, retry=false 2023-07-13 15:16:30,832 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42367,1689261387430 expired; onlineServers=0 2023-07-13 15:16:30,832 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37719,1689261386793' ***** 2023-07-13 15:16:30,832 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 15:16:30,832 DEBUG [M:0;jenkins-hbase4:37719] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3361d621, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:30,833 INFO [M:0;jenkins-hbase4:37719] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:30,834 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:30,834 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-13 15:16:30,835 INFO [M:0;jenkins-hbase4:37719] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3362b59a{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:30,836 INFO [M:0;jenkins-hbase4:37719] server.AbstractConnector(383): Stopped ServerConnector@326eddd2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:30,836 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:30,836 INFO [M:0;jenkins-hbase4:37719] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:30,836 INFO [M:0;jenkins-hbase4:37719] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@625ee407{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:30,836 INFO [M:0;jenkins-hbase4:37719] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@44a9cf4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:30,837 INFO [M:0;jenkins-hbase4:37719] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37719,1689261386793 2023-07-13 15:16:30,837 INFO [M:0;jenkins-hbase4:37719] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37719,1689261386793; all regions closed. 2023-07-13 15:16:30,837 DEBUG [M:0;jenkins-hbase4:37719] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:30,837 INFO [M:0;jenkins-hbase4:37719] master.HMaster(1491): Stopping master jetty server 2023-07-13 15:16:30,838 INFO [M:0;jenkins-hbase4:37719] server.AbstractConnector(383): Stopped ServerConnector@3449362{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:30,838 DEBUG [M:0;jenkins-hbase4:37719] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 15:16:30,838 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-13 15:16:30,838 DEBUG [M:0;jenkins-hbase4:37719] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 15:16:30,838 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261387910] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261387910,5,FailOnTimeoutGroup] 2023-07-13 15:16:30,839 INFO [M:0;jenkins-hbase4:37719] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 15:16:30,840 INFO [M:0;jenkins-hbase4:37719] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
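The o.a.h.t.o.e.j.* classes being stopped above are HBase's relocated (shaded) copy of Eclipse Jetty: the master info server is an embedded Jetty Server whose WebAppContext, static/logs context handlers and ServerConnector are shut down here. A minimal sketch of that start/stop lifecycle against plain Jetty 9 (unshaded package names, a made-up context path), not the exact wiring HBase's InfoServer uses:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;

public final class InfoServerLifecycleSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server(0);                       // 0 = bind an ephemeral port
    ServletContextHandler ctx = new ServletContextHandler();
    ctx.setContextPath("/");                             // made-up context path
    server.setHandler(ctx);
    server.start();
    // ... serve requests ...
    server.stop();  // emits the "Stopped ServerConnector" / "Stopped ...ContextHandler" style messages
  }
}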
2023-07-13 15:16:30,839 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261387910] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261387910,5,FailOnTimeoutGroup] 2023-07-13 15:16:30,840 INFO [M:0;jenkins-hbase4:37719] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:30,841 DEBUG [M:0;jenkins-hbase4:37719] master.HMaster(1512): Stopping service threads 2023-07-13 15:16:30,841 INFO [M:0;jenkins-hbase4:37719] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 15:16:30,841 ERROR [M:0;jenkins-hbase4:37719] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-13 15:16:30,841 INFO [M:0;jenkins-hbase4:37719] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 15:16:30,841 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 15:16:30,842 DEBUG [M:0;jenkins-hbase4:37719] zookeeper.ZKUtil(398): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 15:16:30,842 WARN [M:0;jenkins-hbase4:37719] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 15:16:30,842 INFO [M:0;jenkins-hbase4:37719] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 15:16:30,843 INFO [M:0;jenkins-hbase4:37719] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 15:16:30,843 DEBUG [M:0;jenkins-hbase4:37719] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:30,843 INFO [M:0;jenkins-hbase4:37719] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:30,843 DEBUG [M:0;jenkins-hbase4:37719] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:30,843 DEBUG [M:0;jenkins-hbase4:37719] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:30,843 DEBUG [M:0;jenkins-hbase4:37719] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
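The ChoreService shutdown above cancels whatever ScheduledChore instances are still registered (here QuotaObserverChore with a 60 s period). A minimal sketch of that pattern, assuming the HBase 2.x ChoreService/ScheduledChore/Stoppable classes; the chore name and period below are made up for illustration:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public final class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    Stoppable stopper = new Stoppable() {       // trivial stopper, enough for the sketch
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("sketch");
    service.scheduleChore(new ScheduledChore("ExampleChore", stopper, 60_000) {
      @Override protected void chore() {
        System.out.println("periodic work, period=60s");
      }
    });
    Thread.sleep(1_000);
    service.shutdown();   // cancels remaining chores, as the "Chore service ... on shutdown" line reports
  }
}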
2023-07-13 15:16:30,843 INFO [M:0;jenkins-hbase4:37719] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.10 KB 2023-07-13 15:16:30,866 INFO [M:0;jenkins-hbase4:37719] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/044c39738b124bfc93b8b048f094bddd 2023-07-13 15:16:30,873 DEBUG [M:0;jenkins-hbase4:37719] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/044c39738b124bfc93b8b048f094bddd as hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/044c39738b124bfc93b8b048f094bddd 2023-07-13 15:16:30,879 INFO [M:0;jenkins-hbase4:37719] regionserver.HStore(1080): Added hdfs://localhost:32909/user/jenkins/test-data/7d6a64cc-dbe4-f04d-f6c5-67fa8069ac19/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/044c39738b124bfc93b8b048f094bddd, entries=24, sequenceid=194, filesize=12.4 K 2023-07-13 15:16:30,881 INFO [M:0;jenkins-hbase4:37719] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95179, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 38ms, sequenceid=194, compaction requested=false 2023-07-13 15:16:30,884 INFO [M:0;jenkins-hbase4:37719] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:30,884 DEBUG [M:0;jenkins-hbase4:37719] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:30,888 INFO [M:0;jenkins-hbase4:37719] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 15:16:30,888 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:30,889 INFO [M:0;jenkins-hbase4:37719] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37719 2023-07-13 15:16:30,891 DEBUG [M:0;jenkins-hbase4:37719] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37719,1689261386793 already deleted, retry=false 2023-07-13 15:16:31,169 INFO [M:0;jenkins-hbase4:37719] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37719,1689261386793; zookeeper connection closed. 2023-07-13 15:16:31,170 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:31,171 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): master:37719-0x1015f41af0e0000, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:31,271 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:31,271 INFO [RS:2;jenkins-hbase4:42367] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42367,1689261387430; zookeeper connection closed. 
2023-07-13 15:16:31,271 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:42367-0x1015f41af0e0003, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:31,272 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@e63989a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@e63989a 2023-07-13 15:16:31,371 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:31,371 INFO [RS:1;jenkins-hbase4:45979] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45979,1689261387219; zookeeper connection closed. 2023-07-13 15:16:31,371 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): regionserver:45979-0x1015f41af0e0002, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:31,371 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1ec06b56] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1ec06b56 2023-07-13 15:16:31,372 INFO [Listener at localhost/34081] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-13 15:16:31,372 WARN [Listener at localhost/34081] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:31,389 INFO [Listener at localhost/34081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:31,495 WARN [BP-442916757-172.31.14.131-1689261385892 heartbeating to localhost/127.0.0.1:32909] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:31,495 WARN [BP-442916757-172.31.14.131-1689261385892 heartbeating to localhost/127.0.0.1:32909] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-442916757-172.31.14.131-1689261385892 (Datanode Uuid 76553b21-1526-4e07-b0f1-1119c8d2c999) service to localhost/127.0.0.1:32909 2023-07-13 15:16:31,496 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/dfs/data/data5/current/BP-442916757-172.31.14.131-1689261385892] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:31,497 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/dfs/data/data6/current/BP-442916757-172.31.14.131-1689261385892] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:31,499 WARN [Listener at localhost/34081] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:31,514 INFO [Listener at localhost/34081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:31,621 WARN [BP-442916757-172.31.14.131-1689261385892 heartbeating to localhost/127.0.0.1:32909] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 
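Everything from the region-server stops above down to the DataNode and MiniZK teardown that follows is driven by a single call in the test harness. A minimal sketch, with util standing in for the test's HBaseTestingUtility instance (a hypothetical name):

import org.apache.hadoop.hbase.HBaseTestingUtility;

final class ClusterTeardownSketch {
  // One call stops the HBase masters and region servers, then the mini DFS and
  // mini ZooKeeper cluster, producing the shutdown cascade recorded in this log.
  static void tearDown(HBaseTestingUtility util) throws Exception {
    util.shutdownMiniCluster();
  }
}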
2023-07-13 15:16:31,621 WARN [BP-442916757-172.31.14.131-1689261385892 heartbeating to localhost/127.0.0.1:32909] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-442916757-172.31.14.131-1689261385892 (Datanode Uuid edc2ad89-12f9-4a10-a912-3af1978a3336) service to localhost/127.0.0.1:32909 2023-07-13 15:16:31,622 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/dfs/data/data3/current/BP-442916757-172.31.14.131-1689261385892] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:31,622 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/dfs/data/data4/current/BP-442916757-172.31.14.131-1689261385892] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:31,624 WARN [Listener at localhost/34081] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:31,628 INFO [Listener at localhost/34081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:31,737 WARN [BP-442916757-172.31.14.131-1689261385892 heartbeating to localhost/127.0.0.1:32909] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:31,737 WARN [BP-442916757-172.31.14.131-1689261385892 heartbeating to localhost/127.0.0.1:32909] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-442916757-172.31.14.131-1689261385892 (Datanode Uuid a0dbed22-9ff1-4f8c-b2f9-67796d1c95c2) service to localhost/127.0.0.1:32909 2023-07-13 15:16:31,738 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/dfs/data/data1/current/BP-442916757-172.31.14.131-1689261385892] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:31,738 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/cluster_e6cc4e5c-e139-1058-2cdb-5e013c9734f5/dfs/data/data2/current/BP-442916757-172.31.14.131-1689261385892] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:31,748 INFO [Listener at localhost/34081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:31,868 INFO [Listener at localhost/34081] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 15:16:31,910 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-13 15:16:31,910 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 15:16:31,910 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(445): 
System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.log.dir so I do NOT create it in target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162 2023-07-13 15:16:31,910 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d3ffb94f-0e92-e142-708d-60caa352f1d5/hadoop.tmp.dir so I do NOT create it in target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162 2023-07-13 15:16:31,911 INFO [Listener at localhost/34081] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5, deleteOnExit=true 2023-07-13 15:16:31,911 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 15:16:31,911 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/test.cache.data in system properties and HBase conf 2023-07-13 15:16:31,911 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 15:16:31,911 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir in system properties and HBase conf 2023-07-13 15:16:31,911 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 15:16:31,912 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 15:16:31,912 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 15:16:31,912 DEBUG [Listener at localhost/34081] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-13 15:16:31,912 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 15:16:31,912 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 15:16:31,913 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 15:16:31,913 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 15:16:31,913 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 15:16:31,913 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 15:16:31,913 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 15:16:31,913 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 15:16:31,913 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 15:16:31,914 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/nfs.dump.dir in system properties and HBase conf 2023-07-13 15:16:31,914 INFO [Listener at localhost/34081] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir in system properties and HBase conf 2023-07-13 15:16:31,914 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 15:16:31,914 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 15:16:31,914 INFO [Listener at localhost/34081] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 15:16:31,919 WARN [Listener at localhost/34081] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 15:16:31,919 WARN [Listener at localhost/34081] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 15:16:31,965 DEBUG [Listener at localhost/34081-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015f41af0e000a, quorum=127.0.0.1:59953, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 15:16:31,965 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015f41af0e000a, quorum=127.0.0.1:59953, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-13 15:16:31,983 WARN [Listener at localhost/34081] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:31,986 INFO [Listener at localhost/34081] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:31,995 INFO [Listener at localhost/34081] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/Jetty_localhost_44839_hdfs____xsa9c8/webapp 2023-07-13 15:16:32,126 INFO [Listener at localhost/34081] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44839 2023-07-13 15:16:32,130 WARN [Listener at localhost/34081] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 15:16:32,130 WARN [Listener at localhost/34081] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 15:16:32,173 WARN [Listener at localhost/45993] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:32,189 WARN [Listener at localhost/45993] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:32,191 WARN [Listener 
at localhost/45993] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:32,192 INFO [Listener at localhost/45993] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:32,201 INFO [Listener at localhost/45993] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/Jetty_localhost_39959_datanode____.hckh0i/webapp 2023-07-13 15:16:32,293 INFO [Listener at localhost/45993] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39959 2023-07-13 15:16:32,302 WARN [Listener at localhost/44615] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:32,320 WARN [Listener at localhost/44615] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:32,322 WARN [Listener at localhost/44615] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:32,323 INFO [Listener at localhost/44615] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:32,327 INFO [Listener at localhost/44615] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/Jetty_localhost_42927_datanode____rw45bb/webapp 2023-07-13 15:16:32,424 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a7ea4e0d2709e4f: Processing first storage report for DS-adbd2bd2-5352-4846-8c6b-13245b223073 from datanode 89fbb8c5-bfa4-468c-904e-5d6b7588ce61 2023-07-13 15:16:32,425 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a7ea4e0d2709e4f: from storage DS-adbd2bd2-5352-4846-8c6b-13245b223073 node DatanodeRegistration(127.0.0.1:40055, datanodeUuid=89fbb8c5-bfa4-468c-904e-5d6b7588ce61, infoPort=44841, infoSecurePort=0, ipcPort=44615, storageInfo=lv=-57;cid=testClusterID;nsid=1686321942;c=1689261391921), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:32,425 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a7ea4e0d2709e4f: Processing first storage report for DS-bb590bb2-717b-4bb0-a22f-98f6ee8506c5 from datanode 89fbb8c5-bfa4-468c-904e-5d6b7588ce61 2023-07-13 15:16:32,425 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a7ea4e0d2709e4f: from storage DS-bb590bb2-717b-4bb0-a22f-98f6ee8506c5 node DatanodeRegistration(127.0.0.1:40055, datanodeUuid=89fbb8c5-bfa4-468c-904e-5d6b7588ce61, infoPort=44841, infoSecurePort=0, ipcPort=44615, storageInfo=lv=-57;cid=testClusterID;nsid=1686321942;c=1689261391921), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:32,432 INFO [Listener at localhost/44615] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42927 2023-07-13 15:16:32,438 WARN [Listener at localhost/40347] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-13 15:16:32,453 WARN [Listener at localhost/40347] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:32,455 WARN [Listener at localhost/40347] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:32,456 INFO [Listener at localhost/40347] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:32,460 INFO [Listener at localhost/40347] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/Jetty_localhost_41743_datanode____.auan28/webapp 2023-07-13 15:16:32,536 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfa8c14f36b75b9eb: Processing first storage report for DS-499758ac-a945-4960-a7f2-45a0b4b8755e from datanode 63faf62a-acb0-4cf6-9141-36c70b4dfcf1 2023-07-13 15:16:32,536 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfa8c14f36b75b9eb: from storage DS-499758ac-a945-4960-a7f2-45a0b4b8755e node DatanodeRegistration(127.0.0.1:38175, datanodeUuid=63faf62a-acb0-4cf6-9141-36c70b4dfcf1, infoPort=45923, infoSecurePort=0, ipcPort=40347, storageInfo=lv=-57;cid=testClusterID;nsid=1686321942;c=1689261391921), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:32,536 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfa8c14f36b75b9eb: Processing first storage report for DS-dfb51cae-cef3-4c27-af22-6e45cab3e34b from datanode 63faf62a-acb0-4cf6-9141-36c70b4dfcf1 2023-07-13 15:16:32,536 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfa8c14f36b75b9eb: from storage DS-dfb51cae-cef3-4c27-af22-6e45cab3e34b node DatanodeRegistration(127.0.0.1:38175, datanodeUuid=63faf62a-acb0-4cf6-9141-36c70b4dfcf1, infoPort=45923, infoSecurePort=0, ipcPort=40347, storageInfo=lv=-57;cid=testClusterID;nsid=1686321942;c=1689261391921), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 15:16:32,558 INFO [Listener at localhost/40347] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41743 2023-07-13 15:16:32,566 WARN [Listener at localhost/34653] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:32,659 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe5ebf09b2d14b60: Processing first storage report for DS-0d9b8f29-d247-4af5-b040-e25e4625c530 from datanode 2b56587d-1107-4143-a9f6-48517e5dc2eb 2023-07-13 15:16:32,659 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe5ebf09b2d14b60: from storage DS-0d9b8f29-d247-4af5-b040-e25e4625c530 node DatanodeRegistration(127.0.0.1:45055, datanodeUuid=2b56587d-1107-4143-a9f6-48517e5dc2eb, infoPort=37165, infoSecurePort=0, ipcPort=34653, storageInfo=lv=-57;cid=testClusterID;nsid=1686321942;c=1689261391921), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:32,659 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe5ebf09b2d14b60: Processing first storage 
report for DS-97488bf8-fc79-4f2d-8d79-2464b3425c24 from datanode 2b56587d-1107-4143-a9f6-48517e5dc2eb 2023-07-13 15:16:32,659 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe5ebf09b2d14b60: from storage DS-97488bf8-fc79-4f2d-8d79-2464b3425c24 node DatanodeRegistration(127.0.0.1:45055, datanodeUuid=2b56587d-1107-4143-a9f6-48517e5dc2eb, infoPort=37165, infoSecurePort=0, ipcPort=34653, storageInfo=lv=-57;cid=testClusterID;nsid=1686321942;c=1689261391921), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:32,676 DEBUG [Listener at localhost/34653] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162 2023-07-13 15:16:32,678 INFO [Listener at localhost/34653] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/zookeeper_0, clientPort=54390, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 15:16:32,679 INFO [Listener at localhost/34653] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54390 2023-07-13 15:16:32,679 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:32,680 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:32,700 INFO [Listener at localhost/34653] util.FSUtils(471): Created version file at hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a with version=8 2023-07-13 15:16:32,700 INFO [Listener at localhost/34653] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37375/user/jenkins/test-data/08642bdc-12ce-98de-0989-f649f93f3536/hbase-staging 2023-07-13 15:16:32,700 DEBUG [Listener at localhost/34653] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 15:16:32,701 DEBUG [Listener at localhost/34653] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 15:16:32,701 DEBUG [Listener at localhost/34653] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 15:16:32,701 DEBUG [Listener at localhost/34653] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
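The restart just logged (a fresh DFS with three DataNodes, a single-node MiniZooKeeperCluster on client port 54390, a new version file, then the HBase daemons that follow) corresponds to one startMiniCluster call with the StartMiniClusterOption printed at the top of this block. A minimal sketch, assuming HBase 2.2+ where StartMiniClusterOption is available and again a hypothetical util field:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

final class ClusterStartupSketch {
  // Start a mini cluster matching the option string logged above:
  // 1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server.
  static void startUp(HBaseTestingUtility util) throws Exception {
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);
  }
}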
2023-07-13 15:16:32,702 INFO [Listener at localhost/34653] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:32,702 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:32,702 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:32,702 INFO [Listener at localhost/34653] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:32,702 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:32,702 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:32,702 INFO [Listener at localhost/34653] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:32,703 INFO [Listener at localhost/34653] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41029 2023-07-13 15:16:32,703 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:32,704 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:32,705 INFO [Listener at localhost/34653] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41029 connecting to ZooKeeper ensemble=127.0.0.1:54390 2023-07-13 15:16:32,712 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:410290x0, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:32,713 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41029-0x1015f41c6280000 connected 2023-07-13 15:16:32,730 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:32,731 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:32,731 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:32,734 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41029 2023-07-13 15:16:32,734 DEBUG [Listener at localhost/34653] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41029 2023-07-13 15:16:32,734 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41029 2023-07-13 15:16:32,737 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41029 2023-07-13 15:16:32,737 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41029 2023-07-13 15:16:32,739 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:32,739 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:32,739 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:32,739 INFO [Listener at localhost/34653] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 15:16:32,739 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:32,740 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:32,740 INFO [Listener at localhost/34653] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
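The small pools above (handlerCount=3 on the default.FPBQ queue, one write and two read handlers on priority.RWQ) come from the test configuration rather than production defaults. A minimal sketch of shrinking the relevant settings before the cluster starts; the property names are standard HBase keys, but the exact mapping of each key to the queues named in this log is an assumption:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

final class RpcHandlerConfigSketch {
  // Produce a Configuration with small RPC handler pools, the way a mini-cluster
  // test does, so RpcExecutor logs small handlerCount values as seen above.
  static Configuration smallHandlerConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.regionserver.handler.count", 3);      // assumed to back the default queue
    conf.setInt("hbase.regionserver.metahandler.count", 3);  // assumed to back the priority queue
    return conf;
  }
}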
2023-07-13 15:16:32,740 INFO [Listener at localhost/34653] http.HttpServer(1146): Jetty bound to port 42275 2023-07-13 15:16:32,740 INFO [Listener at localhost/34653] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:32,742 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:32,742 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@558dcba1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:32,743 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:32,743 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ce8454f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:32,855 INFO [Listener at localhost/34653] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:32,856 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:32,856 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:32,857 INFO [Listener at localhost/34653] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:32,858 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:32,859 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@21d993e9{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/jetty-0_0_0_0-42275-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4746124891648796948/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:32,861 INFO [Listener at localhost/34653] server.AbstractConnector(333): Started ServerConnector@50e3498a{HTTP/1.1, (http/1.1)}{0.0.0.0:42275} 2023-07-13 15:16:32,861 INFO [Listener at localhost/34653] server.Server(415): Started @42941ms 2023-07-13 15:16:32,861 INFO [Listener at localhost/34653] master.HMaster(444): hbase.rootdir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a, hbase.cluster.distributed=false 2023-07-13 15:16:32,876 INFO [Listener at localhost/34653] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:32,876 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:32,877 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:32,877 
INFO [Listener at localhost/34653] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:32,877 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:32,877 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:32,877 INFO [Listener at localhost/34653] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:32,877 INFO [Listener at localhost/34653] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35281 2023-07-13 15:16:32,878 INFO [Listener at localhost/34653] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:32,879 DEBUG [Listener at localhost/34653] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:32,879 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:32,880 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:32,881 INFO [Listener at localhost/34653] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35281 connecting to ZooKeeper ensemble=127.0.0.1:54390 2023-07-13 15:16:32,884 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:352810x0, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:32,885 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35281-0x1015f41c6280001 connected 2023-07-13 15:16:32,885 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:32,885 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:32,886 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:32,887 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35281 2023-07-13 15:16:32,887 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35281 2023-07-13 15:16:32,887 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35281 2023-07-13 15:16:32,887 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35281 2023-07-13 15:16:32,888 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35281 2023-07-13 15:16:32,889 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:32,889 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:32,889 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:32,890 INFO [Listener at localhost/34653] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:32,890 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:32,890 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:32,890 INFO [Listener at localhost/34653] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:32,891 INFO [Listener at localhost/34653] http.HttpServer(1146): Jetty bound to port 39815 2023-07-13 15:16:32,891 INFO [Listener at localhost/34653] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:32,904 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:32,905 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@75e5d650{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:32,905 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:32,905 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@295166fc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:33,018 INFO [Listener at localhost/34653] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:33,018 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:33,019 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:33,019 INFO [Listener at localhost/34653] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:33,020 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:33,020 INFO 
[Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5d16009b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/jetty-0_0_0_0-39815-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3146324333996942536/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:33,021 INFO [Listener at localhost/34653] server.AbstractConnector(333): Started ServerConnector@6ba4dce4{HTTP/1.1, (http/1.1)}{0.0.0.0:39815} 2023-07-13 15:16:33,022 INFO [Listener at localhost/34653] server.Server(415): Started @43102ms 2023-07-13 15:16:33,034 INFO [Listener at localhost/34653] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:33,034 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:33,034 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:33,034 INFO [Listener at localhost/34653] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:33,034 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:33,034 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:33,034 INFO [Listener at localhost/34653] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:33,035 INFO [Listener at localhost/34653] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40227 2023-07-13 15:16:33,035 INFO [Listener at localhost/34653] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:33,037 DEBUG [Listener at localhost/34653] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:33,037 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:33,038 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:33,039 INFO [Listener at localhost/34653] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40227 connecting to ZooKeeper ensemble=127.0.0.1:54390 2023-07-13 15:16:33,043 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:402270x0, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 
15:16:33,045 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40227-0x1015f41c6280002 connected 2023-07-13 15:16:33,045 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:33,046 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:33,046 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:33,046 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40227 2023-07-13 15:16:33,050 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40227 2023-07-13 15:16:33,051 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40227 2023-07-13 15:16:33,051 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40227 2023-07-13 15:16:33,051 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40227 2023-07-13 15:16:33,053 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:33,053 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:33,053 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:33,053 INFO [Listener at localhost/34653] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:33,053 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:33,053 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:33,054 INFO [Listener at localhost/34653] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
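
The entries above show regionserver:40227 joining the test ZooKeeper ensemble at 127.0.0.1:54390 and setting watchers on znodes such as /hbase/master and /hbase/running before those znodes exist. A minimal, hypothetical sketch of the same check with the plain ZooKeeper client follows; the ensemble address and znode path come from the log, while the class name and timeouts are illustrative only.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Illustrative only: watches /hbase/master on the ensemble logged above,
// mirroring how ZKUtil "sets a watcher on a znode that does not yet exist".
public class MasterZNodeWatch {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("ZK event: " + event.getType() + " on " + event.getPath());
    ZooKeeper zk = new ZooKeeper("127.0.0.1:54390", 90_000, watcher);
    // exists() with watch=true registers a one-shot watcher whether or not
    // the znode is present yet, which is why the log can watch /hbase/master
    // before the active master has created it.
    Stat stat = zk.exists("/hbase/master", true);
    System.out.println("/hbase/master " + (stat == null ? "not yet created" : "exists"));
    Thread.sleep(5_000); // give a NodeCreated event a chance to arrive
    zk.close();
  }
}
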
2023-07-13 15:16:33,054 INFO [Listener at localhost/34653] http.HttpServer(1146): Jetty bound to port 41301 2023-07-13 15:16:33,054 INFO [Listener at localhost/34653] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:33,057 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:33,057 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@288689f8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:33,058 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:33,058 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@42cd0009{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:33,171 INFO [Listener at localhost/34653] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:33,171 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:33,171 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:33,172 INFO [Listener at localhost/34653] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:33,172 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:33,173 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1f4b851f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/jetty-0_0_0_0-41301-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5519438644893465720/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:33,175 INFO [Listener at localhost/34653] server.AbstractConnector(333): Started ServerConnector@6d04da20{HTTP/1.1, (http/1.1)}{0.0.0.0:41301} 2023-07-13 15:16:33,175 INFO [Listener at localhost/34653] server.Server(415): Started @43255ms 2023-07-13 15:16:33,187 INFO [Listener at localhost/34653] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:33,187 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:33,187 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:33,187 INFO [Listener at localhost/34653] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:33,187 INFO 
[Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:33,187 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:33,187 INFO [Listener at localhost/34653] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:33,189 INFO [Listener at localhost/34653] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44715 2023-07-13 15:16:33,190 INFO [Listener at localhost/34653] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:33,191 DEBUG [Listener at localhost/34653] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:33,191 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:33,193 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:33,194 INFO [Listener at localhost/34653] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44715 connecting to ZooKeeper ensemble=127.0.0.1:54390 2023-07-13 15:16:33,197 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:447150x0, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:33,198 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:447150x0, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:33,199 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44715-0x1015f41c6280003 connected 2023-07-13 15:16:33,199 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:33,202 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:33,206 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44715 2023-07-13 15:16:33,206 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44715 2023-07-13 15:16:33,207 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44715 2023-07-13 15:16:33,207 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44715 2023-07-13 15:16:33,207 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=44715 2023-07-13 15:16:33,209 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:33,210 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:33,210 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:33,210 INFO [Listener at localhost/34653] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:33,211 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:33,211 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:33,211 INFO [Listener at localhost/34653] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:33,211 INFO [Listener at localhost/34653] http.HttpServer(1146): Jetty bound to port 35053 2023-07-13 15:16:33,212 INFO [Listener at localhost/34653] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:33,219 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:33,219 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@51b99b82{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:33,219 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:33,219 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@736db136{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:33,334 INFO [Listener at localhost/34653] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:33,335 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:33,335 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:33,335 INFO [Listener at localhost/34653] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:33,336 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:33,337 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@72c3dca2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/jetty-0_0_0_0-35053-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8755377819345701403/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:33,338 INFO [Listener at localhost/34653] server.AbstractConnector(333): Started ServerConnector@72f6fde0{HTTP/1.1, (http/1.1)}{0.0.0.0:35053} 2023-07-13 15:16:33,338 INFO [Listener at localhost/34653] server.Server(415): Started @43419ms 2023-07-13 15:16:33,340 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:33,343 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@617ef308{HTTP/1.1, (http/1.1)}{0.0.0.0:42771} 2023-07-13 15:16:33,343 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43423ms 2023-07-13 15:16:33,343 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:33,345 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:33,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:33,346 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:33,346 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:33,346 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:33,347 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:33,346 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:33,348 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:33,350 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41029,1689261392701 from backup master directory 2023-07-13 15:16:33,350 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:33,352 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:33,352 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:33,352 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:33,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:33,367 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/hbase.id with ID: 7374015d-a1a9-49c6-8fc0-4e69b86857d5 2023-07-13 15:16:33,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:33,387 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:33,402 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x40c8b085 to 127.0.0.1:54390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:33,407 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3dd1b7cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:33,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:33,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 15:16:33,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:33,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store-tmp 2023-07-13 15:16:33,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:33,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:33,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:33,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:33,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:33,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:33,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
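
The master:store descriptor logged above ('proc' family with VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', and so on) belongs to an internal region, but the same attributes can be expressed with the public HBase 2.x descriptor builders. The sketch below rebuilds an equivalent family on a made-up table name purely for illustration; it does not touch master:store itself.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative only: a 'proc'-like column family with the attributes that
// appear in the master:store descriptor above, attached to a hypothetical table.
public class ProcLikeDescriptor {
  public static TableDescriptor build() {
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)                 // VERSIONS => '1'
        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
        .setBlocksize(65536)               // BLOCKSIZE => '65536'
        .setInMemory(false)                // IN_MEMORY => 'false'
        .setBlockCacheEnabled(true)        // BLOCKCACHE => 'true'
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_store")) // hypothetical name
        .setColumnFamily(proc)
        .build();
  }
}
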
2023-07-13 15:16:33,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:33,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/WALs/jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:33,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41029%2C1689261392701, suffix=, logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/WALs/jenkins-hbase4.apache.org,41029,1689261392701, archiveDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/oldWALs, maxLogs=10 2023-07-13 15:16:33,439 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK] 2023-07-13 15:16:33,440 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK] 2023-07-13 15:16:33,440 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK] 2023-07-13 15:16:33,442 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/WALs/jenkins-hbase4.apache.org,41029,1689261392701/jenkins-hbase4.apache.org%2C41029%2C1689261392701.1689261393424 2023-07-13 15:16:33,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK], DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK]] 2023-07-13 15:16:33,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:33,443 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:33,443 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:33,443 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:33,445 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:33,446 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 15:16:33,446 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 15:16:33,447 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:33,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:33,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:33,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:33,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:33,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9968126400, jitterRate=-0.07164588570594788}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:33,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:33,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 15:16:33,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 15:16:33,453 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 15:16:33,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 15:16:33,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-13 15:16:33,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-13 15:16:33,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 15:16:33,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 15:16:33,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-13 15:16:33,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 15:16:33,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 15:16:33,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 15:16:33,458 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:33,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 15:16:33,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 15:16:33,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 15:16:33,462 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:33,462 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:33,462 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-13 15:16:33,462 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:33,463 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:33,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41029,1689261392701, sessionid=0x1015f41c6280000, setting cluster-up flag (Was=false) 2023-07-13 15:16:33,467 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:33,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 15:16:33,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:33,476 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:33,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 15:16:33,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:33,483 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.hbase-snapshot/.tmp 2023-07-13 15:16:33,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 15:16:33,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 15:16:33,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 15:16:33,484 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:33,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
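
The entries above show the master loading the RSGroupAdminEndpoint coprocessor and refreshing RSGroupInfoManagerImpl, which is the feature this TestRSGroupsAdmin1 run exercises. On HBase 2.4 the hbase-rsgroup module is usually wired up through configuration roughly as sketched below; the values are the documented class names, but treat the snippet as an assumption-laden illustration rather than this test's exact setup.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative only: configuration typically used to enable the rsgroup
// endpoint and its balancer, matching the coprocessor the master reports
// loading in the log above.
public class RsGroupConf {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}
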
2023-07-13 15:16:33,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:33,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:33,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:16:33,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:33,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:16:33,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:33,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:33,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:33,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:33,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 15:16:33,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:33,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689261423497 2023-07-13 15:16:33,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 15:16:33,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 15:16:33,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 15:16:33,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 15:16:33,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 15:16:33,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 15:16:33,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,498 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:33,498 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 15:16:33,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 15:16:33,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 15:16:33,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 15:16:33,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 15:16:33,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 15:16:33,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261393499,5,FailOnTimeoutGroup] 2023-07-13 15:16:33,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261393499,5,FailOnTimeoutGroup] 2023-07-13 15:16:33,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
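
The StochasticLoadBalancer entries above echo its effective settings (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000). Those numbers are normally driven by hbase.master.balancer.stochastic.* keys; the sketch below sets the three values that appear in the log, under the assumption that those standard key names apply to this build.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative only: reproduces the stochastic-balancer numbers printed in
// the log, assuming the usual hbase.master.balancer.stochastic.* keys.
public class BalancerTuning {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    return conf;
  }
}
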
2023-07-13 15:16:33,499 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:33,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,512 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:33,513 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:33,513 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a 2023-07-13 15:16:33,523 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:33,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:33,527 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/info 2023-07-13 15:16:33,528 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:33,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:33,529 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:33,530 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:33,531 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:33,531 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:33,532 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:33,533 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/table 2023-07-13 
15:16:33,533 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:33,534 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:33,535 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740 2023-07-13 15:16:33,536 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740 2023-07-13 15:16:33,543 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(951): ClusterId : 7374015d-a1a9-49c6-8fc0-4e69b86857d5 2023-07-13 15:16:33,544 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
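
The hbase:meta descriptor the PEWorker writes above (info, rep_barrier, and table families, each with their own VERSIONS and BLOCKSIZE) can be read back later from a client. The sketch below inspects it through the Admin API; the connection setup is generic and assumes a reachable cluster, not this test's embedded one specifically.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;

// Illustrative only: prints the hbase:meta column families that the
// InitMetaProcedure above just wrote out.
public class PrintMetaDescriptor {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
      for (ColumnFamilyDescriptor cf : meta.getColumnFamilies()) {
        System.out.println(cf.getNameAsString()
            + " versions=" + cf.getMaxVersions()
            + " blocksize=" + cf.getBlocksize());
      }
    }
  }
}
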
2023-07-13 15:16:33,546 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:33,549 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(951): ClusterId : 7374015d-a1a9-49c6-8fc0-4e69b86857d5 2023-07-13 15:16:33,550 DEBUG [RS:0;jenkins-hbase4:35281] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:33,551 DEBUG [RS:1;jenkins-hbase4:40227] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:33,551 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:33,552 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11186670720, jitterRate=0.04183989763259888}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:33,552 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:33,552 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:33,552 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:33,552 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:33,552 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:33,552 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:33,554 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(951): ClusterId : 7374015d-a1a9-49c6-8fc0-4e69b86857d5 2023-07-13 15:16:33,554 DEBUG [RS:0;jenkins-hbase4:35281] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:33,555 DEBUG [RS:0;jenkins-hbase4:35281] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:33,555 DEBUG [RS:1;jenkins-hbase4:40227] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:33,556 DEBUG [RS:1;jenkins-hbase4:40227] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:33,558 DEBUG [RS:0;jenkins-hbase4:35281] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:33,558 DEBUG [RS:1;jenkins-hbase4:40227] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:33,560 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:33,560 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:33,561 DEBUG [RS:2;jenkins-hbase4:44715] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:33,562 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:33,562 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 15:16:33,562 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 15:16:33,562 DEBUG [RS:0;jenkins-hbase4:35281] zookeeper.ReadOnlyZKClient(139): Connect 0x02803a6e to 127.0.0.1:54390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:33,563 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 15:16:33,564 DEBUG [RS:2;jenkins-hbase4:44715] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:33,564 DEBUG [RS:2;jenkins-hbase4:44715] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:33,567 DEBUG [RS:2;jenkins-hbase4:44715] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:33,577 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 15:16:33,580 DEBUG [RS:2;jenkins-hbase4:44715] zookeeper.ReadOnlyZKClient(139): Connect 0x4e929c0a to 127.0.0.1:54390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:33,580 DEBUG [RS:1;jenkins-hbase4:40227] zookeeper.ReadOnlyZKClient(139): Connect 0x08ead63f to 127.0.0.1:54390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:33,601 DEBUG [RS:0;jenkins-hbase4:35281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c47d44e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:33,601 DEBUG [RS:0;jenkins-hbase4:35281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@878511e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:33,607 DEBUG [RS:2;jenkins-hbase4:44715] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cfb26ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:33,607 DEBUG [RS:2;jenkins-hbase4:44715] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f96371c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:33,607 DEBUG [RS:1;jenkins-hbase4:40227] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4de84946, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=false, bind address=null 2023-07-13 15:16:33,607 DEBUG [RS:1;jenkins-hbase4:40227] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a876d71, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:33,617 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35281 2023-07-13 15:16:33,617 INFO [RS:0;jenkins-hbase4:35281] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:33,617 INFO [RS:0;jenkins-hbase4:35281] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:33,617 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:33,618 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41029,1689261392701 with isa=jenkins-hbase4.apache.org/172.31.14.131:35281, startcode=1689261392876 2023-07-13 15:16:33,618 DEBUG [RS:0;jenkins-hbase4:35281] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:33,619 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44715 2023-07-13 15:16:33,619 INFO [RS:2;jenkins-hbase4:44715] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:33,619 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40227 2023-07-13 15:16:33,619 INFO [RS:2;jenkins-hbase4:44715] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:33,619 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:33,619 INFO [RS:1;jenkins-hbase4:40227] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:33,619 INFO [RS:1;jenkins-hbase4:40227] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:33,619 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 15:16:33,620 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41029,1689261392701 with isa=jenkins-hbase4.apache.org/172.31.14.131:44715, startcode=1689261393186 2023-07-13 15:16:33,620 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41029,1689261392701 with isa=jenkins-hbase4.apache.org/172.31.14.131:40227, startcode=1689261393033 2023-07-13 15:16:33,620 DEBUG [RS:2;jenkins-hbase4:44715] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:33,620 DEBUG [RS:1;jenkins-hbase4:40227] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:33,633 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50545, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:33,633 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45793, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:33,633 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56785, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:33,637 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41029] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:33,638 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:33,638 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 15:16:33,638 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41029] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,638 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a 2023-07-13 15:16:33,639 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:33,639 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45993 2023-07-13 15:16:33,639 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 15:16:33,639 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41029] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:33,639 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42275 2023-07-13 15:16:33,639 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:33,639 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a 2023-07-13 15:16:33,639 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 15:16:33,639 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45993 2023-07-13 15:16:33,639 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a 2023-07-13 15:16:33,639 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42275 2023-07-13 15:16:33,639 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45993 2023-07-13 15:16:33,640 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42275 2023-07-13 15:16:33,646 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:33,653 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35281,1689261392876] 2023-07-13 15:16:33,653 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40227,1689261393033] 2023-07-13 15:16:33,653 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44715,1689261393186] 2023-07-13 15:16:33,653 DEBUG [RS:0;jenkins-hbase4:35281] zookeeper.ZKUtil(162): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:33,653 DEBUG [RS:1;jenkins-hbase4:40227] zookeeper.ZKUtil(162): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:33,653 WARN 
[RS:0;jenkins-hbase4:35281] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:33,653 WARN [RS:1;jenkins-hbase4:40227] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:33,653 INFO [RS:0;jenkins-hbase4:35281] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:33,653 DEBUG [RS:2;jenkins-hbase4:44715] zookeeper.ZKUtil(162): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,653 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:33,653 INFO [RS:1;jenkins-hbase4:40227] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:33,654 WARN [RS:2;jenkins-hbase4:44715] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:33,654 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:33,654 INFO [RS:2;jenkins-hbase4:44715] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:33,654 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,680 DEBUG [RS:0;jenkins-hbase4:35281] zookeeper.ZKUtil(162): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:33,681 DEBUG [RS:0;jenkins-hbase4:35281] zookeeper.ZKUtil(162): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:33,681 DEBUG [RS:0;jenkins-hbase4:35281] zookeeper.ZKUtil(162): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,682 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:33,682 INFO [RS:0;jenkins-hbase4:35281] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:33,684 DEBUG [RS:1;jenkins-hbase4:40227] zookeeper.ZKUtil(162): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:33,684 DEBUG [RS:1;jenkins-hbase4:40227] zookeeper.ZKUtil(162): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:33,684 DEBUG 
[RS:1;jenkins-hbase4:40227] zookeeper.ZKUtil(162): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,685 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:33,685 INFO [RS:1;jenkins-hbase4:40227] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:33,691 INFO [RS:0;jenkins-hbase4:35281] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:33,691 DEBUG [RS:2;jenkins-hbase4:44715] zookeeper.ZKUtil(162): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:33,692 DEBUG [RS:2;jenkins-hbase4:44715] zookeeper.ZKUtil(162): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:33,692 DEBUG [RS:2;jenkins-hbase4:44715] zookeeper.ZKUtil(162): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,693 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:33,693 INFO [RS:2;jenkins-hbase4:44715] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:33,699 INFO [RS:0;jenkins-hbase4:35281] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:33,699 INFO [RS:0;jenkins-hbase4:35281] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,700 INFO [RS:2;jenkins-hbase4:44715] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:33,700 INFO [RS:1;jenkins-hbase4:40227] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:33,700 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:33,701 INFO [RS:2;jenkins-hbase4:44715] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:33,701 INFO [RS:2;jenkins-hbase4:44715] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,702 INFO [RS:1;jenkins-hbase4:40227] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:33,702 INFO [RS:1;jenkins-hbase4:40227] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:33,702 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:33,707 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:33,707 INFO [RS:0;jenkins-hbase4:35281] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,709 INFO [RS:2;jenkins-hbase4:44715] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,709 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,709 INFO [RS:1;jenkins-hbase4:40227] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:33,710 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,710 DEBUG [RS:0;jenkins-hbase4:35281] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,711 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,711 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,711 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,711 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,711 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,711 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,711 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:33,712 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:33,712 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:2;jenkins-hbase4:44715] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,712 DEBUG [RS:1;jenkins-hbase4:40227] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:33,714 INFO [RS:0;jenkins-hbase4:35281] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,715 INFO [RS:0;jenkins-hbase4:35281] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,715 INFO [RS:0;jenkins-hbase4:35281] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,724 INFO [RS:1;jenkins-hbase4:40227] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,724 INFO [RS:1;jenkins-hbase4:40227] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,727 INFO [RS:2;jenkins-hbase4:44715] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,727 INFO [RS:1;jenkins-hbase4:40227] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,728 DEBUG [jenkins-hbase4:41029] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 15:16:33,727 INFO [RS:2;jenkins-hbase4:44715] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,728 INFO [RS:2;jenkins-hbase4:44715] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:33,728 DEBUG [jenkins-hbase4:41029] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:33,728 DEBUG [jenkins-hbase4:41029] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:33,728 DEBUG [jenkins-hbase4:41029] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:33,728 DEBUG [jenkins-hbase4:41029] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:33,728 DEBUG [jenkins-hbase4:41029] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:33,731 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44715,1689261393186, state=OPENING 2023-07-13 15:16:33,733 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 15:16:33,735 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:33,736 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:33,736 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44715,1689261393186}] 2023-07-13 15:16:33,742 INFO [RS:1;jenkins-hbase4:40227] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:33,742 INFO [RS:1;jenkins-hbase4:40227] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40227,1689261393033-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,744 INFO [RS:0;jenkins-hbase4:35281] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:33,744 INFO [RS:0;jenkins-hbase4:35281] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35281,1689261392876-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,749 INFO [RS:2;jenkins-hbase4:44715] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:33,749 INFO [RS:2;jenkins-hbase4:44715] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44715,1689261393186-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:33,754 INFO [RS:1;jenkins-hbase4:40227] regionserver.Replication(203): jenkins-hbase4.apache.org,40227,1689261393033 started 2023-07-13 15:16:33,754 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40227,1689261393033, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40227, sessionid=0x1015f41c6280002 2023-07-13 15:16:33,754 DEBUG [RS:1;jenkins-hbase4:40227] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:33,754 DEBUG [RS:1;jenkins-hbase4:40227] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:33,754 DEBUG [RS:1;jenkins-hbase4:40227] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40227,1689261393033' 2023-07-13 15:16:33,754 DEBUG [RS:1;jenkins-hbase4:40227] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:33,755 DEBUG [RS:1;jenkins-hbase4:40227] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:33,755 DEBUG [RS:1;jenkins-hbase4:40227] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:33,755 DEBUG [RS:1;jenkins-hbase4:40227] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:33,755 DEBUG [RS:1;jenkins-hbase4:40227] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:33,755 DEBUG [RS:1;jenkins-hbase4:40227] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40227,1689261393033' 2023-07-13 15:16:33,755 DEBUG [RS:1;jenkins-hbase4:40227] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:33,755 DEBUG [RS:1;jenkins-hbase4:40227] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:33,756 DEBUG [RS:1;jenkins-hbase4:40227] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:33,756 INFO [RS:0;jenkins-hbase4:35281] regionserver.Replication(203): jenkins-hbase4.apache.org,35281,1689261392876 started 2023-07-13 15:16:33,756 INFO [RS:1;jenkins-hbase4:40227] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:33,756 INFO [RS:1;jenkins-hbase4:40227] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:33,756 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35281,1689261392876, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35281, sessionid=0x1015f41c6280001 2023-07-13 15:16:33,756 DEBUG [RS:0;jenkins-hbase4:35281] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:33,756 DEBUG [RS:0;jenkins-hbase4:35281] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:33,756 DEBUG [RS:0;jenkins-hbase4:35281] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35281,1689261392876' 2023-07-13 15:16:33,756 DEBUG [RS:0;jenkins-hbase4:35281] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:33,757 DEBUG [RS:0;jenkins-hbase4:35281] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:33,757 DEBUG [RS:0;jenkins-hbase4:35281] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:33,757 DEBUG [RS:0;jenkins-hbase4:35281] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:33,757 DEBUG [RS:0;jenkins-hbase4:35281] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:33,757 DEBUG [RS:0;jenkins-hbase4:35281] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35281,1689261392876' 2023-07-13 15:16:33,757 DEBUG [RS:0;jenkins-hbase4:35281] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:33,757 DEBUG [RS:0;jenkins-hbase4:35281] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:33,758 DEBUG [RS:0;jenkins-hbase4:35281] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:33,758 INFO [RS:0;jenkins-hbase4:35281] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:33,758 INFO [RS:0;jenkins-hbase4:35281] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:33,762 INFO [RS:2;jenkins-hbase4:44715] regionserver.Replication(203): jenkins-hbase4.apache.org,44715,1689261393186 started 2023-07-13 15:16:33,762 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44715,1689261393186, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44715, sessionid=0x1015f41c6280003 2023-07-13 15:16:33,762 DEBUG [RS:2;jenkins-hbase4:44715] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:33,762 DEBUG [RS:2;jenkins-hbase4:44715] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,763 DEBUG [RS:2;jenkins-hbase4:44715] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44715,1689261393186' 2023-07-13 15:16:33,763 DEBUG [RS:2;jenkins-hbase4:44715] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:33,763 DEBUG [RS:2;jenkins-hbase4:44715] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:33,763 DEBUG [RS:2;jenkins-hbase4:44715] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:33,763 DEBUG [RS:2;jenkins-hbase4:44715] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:33,763 DEBUG [RS:2;jenkins-hbase4:44715] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,763 DEBUG [RS:2;jenkins-hbase4:44715] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44715,1689261393186' 2023-07-13 15:16:33,763 DEBUG [RS:2;jenkins-hbase4:44715] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:33,764 DEBUG [RS:2;jenkins-hbase4:44715] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:33,764 DEBUG [RS:2;jenkins-hbase4:44715] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:33,764 INFO [RS:2;jenkins-hbase4:44715] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:33,764 INFO [RS:2;jenkins-hbase4:44715] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:33,800 WARN [ReadOnlyZKClient-127.0.0.1:54390@0x40c8b085] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 15:16:33,801 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:33,802 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35366, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:33,802 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-13 15:16:33,802 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44715] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:35366 deadline: 1689261453802, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,859 INFO [RS:1;jenkins-hbase4:40227] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40227%2C1689261393033, suffix=, logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,40227,1689261393033, archiveDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs, maxLogs=32 2023-07-13 15:16:33,860 INFO [RS:0;jenkins-hbase4:35281] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35281%2C1689261392876, suffix=, logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,35281,1689261392876, archiveDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs, maxLogs=32 2023-07-13 15:16:33,867 INFO [RS:2;jenkins-hbase4:44715] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44715%2C1689261393186, suffix=, logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,44715,1689261393186, archiveDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs, maxLogs=32 2023-07-13 15:16:33,887 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK] 2023-07-13 15:16:33,887 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK] 2023-07-13 15:16:33,889 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK] 2023-07-13 15:16:33,889 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK] 2023-07-13 15:16:33,891 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK] 2023-07-13 15:16:33,891 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK] 2023-07-13 15:16:33,891 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:33,899 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK] 2023-07-13 15:16:33,899 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:33,899 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK] 2023-07-13 15:16:33,900 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK] 2023-07-13 15:16:33,901 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35382, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:33,902 INFO [RS:1;jenkins-hbase4:40227] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,40227,1689261393033/jenkins-hbase4.apache.org%2C40227%2C1689261393033.1689261393859 2023-07-13 15:16:33,902 INFO [RS:0;jenkins-hbase4:35281] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,35281,1689261392876/jenkins-hbase4.apache.org%2C35281%2C1689261392876.1689261393861 2023-07-13 15:16:33,902 DEBUG [RS:1;jenkins-hbase4:40227] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK], DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK], DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK]] 2023-07-13 15:16:33,902 DEBUG [RS:0;jenkins-hbase4:35281] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK], DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK], DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK]] 2023-07-13 15:16:33,903 INFO [RS:2;jenkins-hbase4:44715] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,44715,1689261393186/jenkins-hbase4.apache.org%2C44715%2C1689261393186.1689261393867 2023-07-13 15:16:33,903 DEBUG [RS:2;jenkins-hbase4:44715] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK], DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK], DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK]] 2023-07-13 15:16:33,909 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 15:16:33,909 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:33,911 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44715%2C1689261393186.meta, suffix=.meta, logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,44715,1689261393186, archiveDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs, maxLogs=32 2023-07-13 15:16:33,924 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK] 2023-07-13 15:16:33,924 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK] 2023-07-13 15:16:33,925 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK] 2023-07-13 15:16:33,927 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,44715,1689261393186/jenkins-hbase4.apache.org%2C44715%2C1689261393186.meta.1689261393911.meta 2023-07-13 15:16:33,927 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK], DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK], DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK]] 2023-07-13 15:16:33,927 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:33,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:33,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: 
region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 15:16:33,928 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-13 15:16:33,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 15:16:33,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:33,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 15:16:33,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 15:16:33,929 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:33,930 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/info 2023-07-13 15:16:33,930 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/info 2023-07-13 15:16:33,931 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:33,931 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:33,931 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:33,932 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:33,932 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:33,932 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:33,933 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:33,933 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:33,934 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/table 2023-07-13 15:16:33,934 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/table 2023-07-13 15:16:33,934 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:33,934 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:33,935 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740 2023-07-13 15:16:33,936 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740 2023-07-13 15:16:33,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No 
hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 15:16:33,939 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:33,940 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10612850080, jitterRate=-0.011601313948631287}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:33,940 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:33,940 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689261393891 2023-07-13 15:16:33,944 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 15:16:33,945 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 15:16:33,945 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44715,1689261393186, state=OPEN 2023-07-13 15:16:33,946 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:33,946 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:33,948 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 15:16:33,948 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44715,1689261393186 in 210 msec 2023-07-13 15:16:33,949 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 15:16:33,949 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 386 msec 2023-07-13 15:16:33,951 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 466 msec 2023-07-13 15:16:33,951 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689261393951, completionTime=-1 2023-07-13 15:16:33,951 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 15:16:33,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-13 15:16:33,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 15:16:33,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689261453960 2023-07-13 15:16:33,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689261513960 2023-07-13 15:16:33,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-07-13 15:16:33,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41029,1689261392701-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41029,1689261392701-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41029,1689261392701-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41029, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:33,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-13 15:16:33,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:33,967 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 15:16:33,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 15:16:33,968 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:33,969 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:33,971 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:33,971 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a empty. 2023-07-13 15:16:33,971 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:33,972 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 15:16:33,985 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:33,986 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 33adbfeda53f12cfeeea717c33fa723a, NAME => 'hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp 2023-07-13 15:16:33,994 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:33,994 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 33adbfeda53f12cfeeea717c33fa723a, disabling compactions & flushes 2023-07-13 15:16:33,994 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 
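The create statement above spells out every attribute of the 'info' family of hbase:namespace. For reference, the same descriptor shape can be expressed with the public client API; this is only an illustrative sketch against a hypothetical user table (system tables are created by the master itself, not by client code):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Mirror the attributes logged for the 'info' family above.
            ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                .setInMemory(true)                  // IN_MEMORY => 'true'
                .setMaxVersions(10)                 // VERSIONS => '10'
                .setBlocksize(8192)                 // BLOCKSIZE => '8192'
                .build();
            TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo_namespace_like"))  // hypothetical name
                .setColumnFamily(info)
                .build();
            admin.createTable(td);
        }
    }
}
```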
2023-07-13 15:16:33,994 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:33,994 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. after waiting 0 ms 2023-07-13 15:16:33,994 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:33,994 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:33,994 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 33adbfeda53f12cfeeea717c33fa723a: 2023-07-13 15:16:33,996 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:33,997 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261393997"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261393997"}]},"ts":"1689261393997"} 2023-07-13 15:16:33,999 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:33,999 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:34,000 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261394000"}]},"ts":"1689261394000"} 2023-07-13 15:16:34,000 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 15:16:34,005 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:34,005 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:34,005 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:34,005 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:34,005 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:34,005 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=33adbfeda53f12cfeeea717c33fa723a, ASSIGN}] 2023-07-13 15:16:34,006 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=33adbfeda53f12cfeeea717c33fa723a, ASSIGN 2023-07-13 15:16:34,007 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=33adbfeda53f12cfeeea717c33fa723a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35281,1689261392876; forceNewPlan=false, retain=false 2023-07-13 15:16:34,104 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:34,106 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 15:16:34,107 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:34,108 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:34,109 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,110 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86 empty. 
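The hbase:rsgroup descriptor created just above differs mainly in its table-level attributes: the MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy. With the same builder API those map roughly to the following sketch (again a hypothetical table name, not the system table itself):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class RsGroupLikeDescriptorSketch {
    public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_rsgroup_like"))  // hypothetical name
            // coprocessor$1 => '|...MultiRowMutationEndpoint|536870911|'
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // METADATA => {'SPLIT_POLICY' => '...DisabledRegionSplitPolicy'}
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))  // single 'm' family
            .build();
        System.out.println(td);
    }
}
```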
2023-07-13 15:16:34,110 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,110 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 15:16:34,121 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:34,122 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a934e69702da3551c62dbdada49afb86, NAME => 'hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp 2023-07-13 15:16:34,130 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:34,130 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing a934e69702da3551c62dbdada49afb86, disabling compactions & flushes 2023-07-13 15:16:34,130 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 2023-07-13 15:16:34,130 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 2023-07-13 15:16:34,130 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. after waiting 0 ms 2023-07-13 15:16:34,130 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 2023-07-13 15:16:34,130 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 
2023-07-13 15:16:34,130 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for a934e69702da3551c62dbdada49afb86: 2023-07-13 15:16:34,132 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:34,133 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261394133"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261394133"}]},"ts":"1689261394133"} 2023-07-13 15:16:34,134 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:34,135 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:34,135 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261394135"}]},"ts":"1689261394135"} 2023-07-13 15:16:34,136 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 15:16:34,139 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:34,139 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:34,139 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:34,139 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:34,139 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:34,139 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a934e69702da3551c62dbdada49afb86, ASSIGN}] 2023-07-13 15:16:34,140 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a934e69702da3551c62dbdada49afb86, ASSIGN 2023-07-13 15:16:34,140 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a934e69702da3551c62dbdada49afb86, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40227,1689261393033; forceNewPlan=false, retain=false 2023-07-13 15:16:34,141 INFO [jenkins-hbase4:41029] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-13 15:16:34,143 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=33adbfeda53f12cfeeea717c33fa723a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:34,143 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261394142"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261394142"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261394142"}]},"ts":"1689261394142"} 2023-07-13 15:16:34,143 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a934e69702da3551c62dbdada49afb86, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:34,143 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261394143"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261394143"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261394143"}]},"ts":"1689261394143"} 2023-07-13 15:16:34,144 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 33adbfeda53f12cfeeea717c33fa723a, server=jenkins-hbase4.apache.org,35281,1689261392876}] 2023-07-13 15:16:34,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure a934e69702da3551c62dbdada49afb86, server=jenkins-hbase4.apache.org,40227,1689261393033}] 2023-07-13 15:16:34,297 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:34,297 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:34,297 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:34,298 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:34,299 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37218, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:34,299 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47460, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:34,303 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 2023-07-13 15:16:34,303 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 
2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a934e69702da3551c62dbdada49afb86, NAME => 'hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 33adbfeda53f12cfeeea717c33fa723a, NAME => 'hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. service=MultiRowMutationService 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:34,303 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,305 INFO [StoreOpener-33adbfeda53f12cfeeea717c33fa723a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:34,305 INFO [StoreOpener-a934e69702da3551c62dbdada49afb86-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,306 DEBUG [StoreOpener-33adbfeda53f12cfeeea717c33fa723a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a/info 2023-07-13 15:16:34,306 DEBUG [StoreOpener-33adbfeda53f12cfeeea717c33fa723a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a/info 2023-07-13 15:16:34,306 DEBUG [StoreOpener-a934e69702da3551c62dbdada49afb86-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86/m 2023-07-13 15:16:34,306 DEBUG [StoreOpener-a934e69702da3551c62dbdada49afb86-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86/m 2023-07-13 15:16:34,306 INFO [StoreOpener-33adbfeda53f12cfeeea717c33fa723a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 33adbfeda53f12cfeeea717c33fa723a columnFamilyName info 2023-07-13 15:16:34,306 INFO [StoreOpener-a934e69702da3551c62dbdada49afb86-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a934e69702da3551c62dbdada49afb86 columnFamilyName m 2023-07-13 15:16:34,307 INFO [StoreOpener-33adbfeda53f12cfeeea717c33fa723a-1] regionserver.HStore(310): Store=33adbfeda53f12cfeeea717c33fa723a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:34,307 INFO [StoreOpener-a934e69702da3551c62dbdada49afb86-1] regionserver.HStore(310): Store=a934e69702da3551c62dbdada49afb86/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:34,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:34,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:34,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:34,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:34,315 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:34,315 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:34,315 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 33adbfeda53f12cfeeea717c33fa723a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11859095680, jitterRate=0.10446435213088989}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:34,315 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a934e69702da3551c62dbdada49afb86; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2d3e42c3, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:34,315 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 33adbfeda53f12cfeeea717c33fa723a: 2023-07-13 15:16:34,316 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a934e69702da3551c62dbdada49afb86: 2023-07-13 15:16:34,316 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a., pid=8, masterSystemTime=1689261394297 2023-07-13 15:16:34,317 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86., pid=9, masterSystemTime=1689261394297 2023-07-13 15:16:34,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:34,322 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:34,322 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=33adbfeda53f12cfeeea717c33fa723a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:34,322 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261394322"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261394322"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261394322"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261394322"}]},"ts":"1689261394322"} 2023-07-13 15:16:34,322 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 2023-07-13 15:16:34,323 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 
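The Put entries around here show which hbase:meta columns the master writes as each region goes OPENING and then OPEN (info:regioninfo, info:sn, info:server, info:serverstartcode, info:seqnumDuringOpen, info:state). Those columns can be read back from any client; a minimal sketch that only prints the server and state columns as logged:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaStateSketch {
    public static void main(String[] args) throws IOException {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan().addFamily(info))) {
            for (Result r : scanner) {
                // info:server is "host:port"; info:state is the RegionState name, e.g. OPEN.
                System.out.println(Bytes.toString(r.getRow())
                    + " -> " + Bytes.toString(r.getValue(info, Bytes.toBytes("server")))
                    + " / " + Bytes.toString(r.getValue(info, Bytes.toBytes("state"))));
            }
        }
    }
}
```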
2023-07-13 15:16:34,323 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a934e69702da3551c62dbdada49afb86, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:34,324 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261394323"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261394323"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261394323"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261394323"}]},"ts":"1689261394323"} 2023-07-13 15:16:34,326 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-13 15:16:34,326 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 33adbfeda53f12cfeeea717c33fa723a, server=jenkins-hbase4.apache.org,35281,1689261392876 in 180 msec 2023-07-13 15:16:34,326 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-13 15:16:34,326 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure a934e69702da3551c62dbdada49afb86, server=jenkins-hbase4.apache.org,40227,1689261393033 in 181 msec 2023-07-13 15:16:34,328 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-13 15:16:34,328 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-13 15:16:34,328 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=33adbfeda53f12cfeeea717c33fa723a, ASSIGN in 321 msec 2023-07-13 15:16:34,328 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a934e69702da3551c62dbdada49afb86, ASSIGN in 187 msec 2023-07-13 15:16:34,328 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:34,328 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:34,328 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261394328"}]},"ts":"1689261394328"} 2023-07-13 15:16:34,328 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261394328"}]},"ts":"1689261394328"} 2023-07-13 15:16:34,329 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 15:16:34,330 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 15:16:34,334 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:34,337 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:34,338 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 230 msec 2023-07-13 15:16:34,338 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 371 msec 2023-07-13 15:16:34,368 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 15:16:34,370 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:34,370 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:34,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:34,374 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:34,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 15:16:34,384 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:34,390 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-13 15:16:34,398 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 15:16:34,404 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:34,407 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-13 15:16:34,408 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:34,409 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37220, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:34,411 INFO 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 15:16:34,411 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-13 15:16:34,415 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 15:16:34,416 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:34,416 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:34,418 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 15:16:34,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.066sec 2023-07-13 15:16:34,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-13 15:16:34,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 15:16:34,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 15:16:34,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41029,1689261392701-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 15:16:34,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41029,1689261392701-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
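With hbase:rsgroup online and the group manager refreshed ("Refreshing in Online mode" above), group membership becomes queryable through the RSGroupAdmin endpoint; the VerifyingRSGroupAdminClient and the ListRSGroupInfos request a little further down are the test doing exactly that. A minimal client-side sketch, assuming the hbase-rsgroup module is on the classpath and an open Connection named conn is available (both assumptions here):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRsGroupsSketch {
    static void listGroups(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            // Every server starts in the implicit "default" group unless it is moved.
            System.out.println(group.getName()
                + " servers=" + group.getServers()
                + " tables=" + group.getTables());
        }
    }
}
```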
2023-07-13 15:16:34,419 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 15:16:34,419 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:34,421 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 15:16:34,451 DEBUG [Listener at localhost/34653] zookeeper.ReadOnlyZKClient(139): Connect 0x4875d50c to 127.0.0.1:54390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:34,457 DEBUG [Listener at localhost/34653] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@357edf8c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:34,459 DEBUG [hconnection-0x5934234d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:34,461 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35384, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:34,462 INFO [Listener at localhost/34653] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:34,462 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:34,467 DEBUG [Listener at localhost/34653] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 15:16:34,469 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35058, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 15:16:34,473 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 15:16:34,473 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:34,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 15:16:34,474 DEBUG [Listener at localhost/34653] zookeeper.ReadOnlyZKClient(139): Connect 0x575b56c6 to 127.0.0.1:54390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:34,482 DEBUG [Listener at localhost/34653] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f0d392b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:34,482 INFO [Listener at localhost/34653] zookeeper.RecoverableZooKeeper(93): Process 
identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54390 2023-07-13 15:16:34,486 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:34,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:34,489 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015f41c628000a connected 2023-07-13 15:16:34,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:34,492 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 15:16:34,504 INFO [Listener at localhost/34653] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:34,504 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:34,504 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:34,504 INFO [Listener at localhost/34653] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:34,505 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:34,505 INFO [Listener at localhost/34653] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:34,505 INFO [Listener at localhost/34653] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:34,505 INFO [Listener at localhost/34653] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43281 2023-07-13 15:16:34,506 INFO [Listener at localhost/34653] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:34,507 DEBUG [Listener at localhost/34653] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:34,508 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:34,508 INFO [Listener at localhost/34653] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:34,509 INFO [Listener at localhost/34653] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43281 connecting to ZooKeeper ensemble=127.0.0.1:54390 2023-07-13 15:16:34,513 
DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:432810x0, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:34,516 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(162): regionserver:432810x0, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:34,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43281-0x1015f41c628000b connected 2023-07-13 15:16:34,517 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(162): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 15:16:34,518 DEBUG [Listener at localhost/34653] zookeeper.ZKUtil(164): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:34,518 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43281 2023-07-13 15:16:34,519 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43281 2023-07-13 15:16:34,519 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43281 2023-07-13 15:16:34,519 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43281 2023-07-13 15:16:34,519 DEBUG [Listener at localhost/34653] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43281 2023-07-13 15:16:34,521 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:34,522 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:34,522 INFO [Listener at localhost/34653] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:34,522 INFO [Listener at localhost/34653] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:34,522 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:34,523 INFO [Listener at localhost/34653] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:34,523 INFO [Listener at localhost/34653] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:34,523 INFO [Listener at localhost/34653] http.HttpServer(1146): Jetty bound to port 44437 2023-07-13 15:16:34,523 INFO [Listener at localhost/34653] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:34,525 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:34,525 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d3c5a39{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:34,525 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:34,525 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d37a2b0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:34,649 INFO [Listener at localhost/34653] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:34,650 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:34,650 INFO [Listener at localhost/34653] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:34,650 INFO [Listener at localhost/34653] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:34,651 INFO [Listener at localhost/34653] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:34,652 INFO [Listener at localhost/34653] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3e8cf2fc{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/java.io.tmpdir/jetty-0_0_0_0-44437-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3461785118054823782/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:34,654 INFO [Listener at localhost/34653] server.AbstractConnector(333): Started ServerConnector@134f4aae{HTTP/1.1, (http/1.1)}{0.0.0.0:44437} 2023-07-13 15:16:34,654 INFO [Listener at localhost/34653] server.Server(415): Started @44734ms 2023-07-13 15:16:34,657 INFO [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(951): ClusterId : 7374015d-a1a9-49c6-8fc0-4e69b86857d5 2023-07-13 15:16:34,658 DEBUG [RS:3;jenkins-hbase4:43281] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:34,661 DEBUG [RS:3;jenkins-hbase4:43281] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:34,661 DEBUG [RS:3;jenkins-hbase4:43281] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:34,663 DEBUG [RS:3;jenkins-hbase4:43281] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:34,663 DEBUG [RS:3;jenkins-hbase4:43281] zookeeper.ReadOnlyZKClient(139): Connect 0x139f189c to 
127.0.0.1:54390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:34,669 DEBUG [RS:3;jenkins-hbase4:43281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d58f49f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:34,669 DEBUG [RS:3;jenkins-hbase4:43281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@196773ec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:34,679 DEBUG [RS:3;jenkins-hbase4:43281] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43281 2023-07-13 15:16:34,679 INFO [RS:3;jenkins-hbase4:43281] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:34,679 INFO [RS:3;jenkins-hbase4:43281] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:34,679 DEBUG [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:34,680 INFO [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41029,1689261392701 with isa=jenkins-hbase4.apache.org/172.31.14.131:43281, startcode=1689261394504 2023-07-13 15:16:34,680 DEBUG [RS:3;jenkins-hbase4:43281] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:34,682 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52551, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:34,683 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41029] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,683 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
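This stretch is the extra region server the test brings back after "Restoring servers: 1": a fourth HRegionServer (port 43281) reports for duty, the master registers it, and the RSGroup listener starts updating the default group. In test code this is normally driven through the mini cluster utilities rather than by hand; a rough sketch, with TEST_UTIL as a hypothetical reference to the running HBaseTestingUtility:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;

public class AddRegionServerSketch {
    // Sketch only: TEST_UTIL stands in for the test's own HBaseTestingUtility instance.
    static void addOneRegionServer(HBaseTestingUtility TEST_UTIL) throws Exception {
        MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
        cluster.startRegionServer();  // spawns another in-process HRegionServer thread
        // Wait until the master reports the expected number of live region servers.
        TEST_UTIL.waitFor(60000,
            () -> cluster.getMaster().getServerManager().getOnlineServersList().size() == 4);
    }
}
```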
2023-07-13 15:16:34,683 DEBUG [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a 2023-07-13 15:16:34,683 DEBUG [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45993 2023-07-13 15:16:34,683 DEBUG [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42275 2023-07-13 15:16:34,690 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:34,690 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:34,690 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:34,690 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:34,690 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:34,690 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:34,691 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:34,691 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43281,1689261394504] 2023-07-13 15:16:34,692 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 15:16:34,692 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:34,692 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:34,692 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,694 DEBUG [RS:3;jenkins-hbase4:43281] zookeeper.ZKUtil(162): regionserver:43281-0x1015f41c628000b, 
quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,695 WARN [RS:3;jenkins-hbase4:43281] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:34,695 INFO [RS:3;jenkins-hbase4:43281] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:34,695 DEBUG [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,695 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:34,695 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:34,695 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:34,695 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:34,698 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:34,698 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:34,698 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,698 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,701 DEBUG [RS:3;jenkins-hbase4:43281] zookeeper.ZKUtil(162): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:34,701 DEBUG [RS:3;jenkins-hbase4:43281] zookeeper.ZKUtil(162): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:34,702 DEBUG [RS:3;jenkins-hbase4:43281] zookeeper.ZKUtil(162): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:34,702 DEBUG [RS:3;jenkins-hbase4:43281] zookeeper.ZKUtil(162): regionserver:43281-0x1015f41c628000b, 
quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,703 DEBUG [RS:3;jenkins-hbase4:43281] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:34,703 INFO [RS:3;jenkins-hbase4:43281] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:34,704 INFO [RS:3;jenkins-hbase4:43281] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:34,704 INFO [RS:3;jenkins-hbase4:43281] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:34,704 INFO [RS:3;jenkins-hbase4:43281] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:34,705 INFO [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:34,706 INFO [RS:3;jenkins-hbase4:43281] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:34,707 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,707 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,707 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,707 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,707 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,707 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:34,707 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,707 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,708 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,708 DEBUG [RS:3;jenkins-hbase4:43281] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:34,708 INFO [RS:3;jenkins-hbase4:43281] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
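The PressureAwareCompactionThroughputController bounds reported above (higher bound 100 MB/s, lower bound 50 MB/s, 60000 ms tuning period) come from configuration. A minimal sketch, assuming the hbase.hstore.compaction.throughput.* keys used by that controller, of overriding them programmatically before a region server is started; the class name and printed check are illustrative only:

```java
// Sketch only: overriding compaction throughput limits via Configuration.
// Key names follow PressureAwareCompactionThroughputController; the values
// mirror the bounds reported in the log above (100 MB/s and 50 MB/s).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionThroughputConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60 * 1000);
    // Sanity check: read the value back, as the controller would at startup.
    System.out.println("higher.bound=" +
        conf.getLong("hbase.hstore.compaction.throughput.higher.bound", -1));
  }
}
```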
2023-07-13 15:16:34,709 INFO [RS:3;jenkins-hbase4:43281] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:34,709 INFO [RS:3;jenkins-hbase4:43281] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:34,724 INFO [RS:3;jenkins-hbase4:43281] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:34,724 INFO [RS:3;jenkins-hbase4:43281] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43281,1689261394504-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:34,738 INFO [RS:3;jenkins-hbase4:43281] regionserver.Replication(203): jenkins-hbase4.apache.org,43281,1689261394504 started 2023-07-13 15:16:34,739 INFO [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43281,1689261394504, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43281, sessionid=0x1015f41c628000b 2023-07-13 15:16:34,739 DEBUG [RS:3;jenkins-hbase4:43281] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:34,739 DEBUG [RS:3;jenkins-hbase4:43281] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,739 DEBUG [RS:3;jenkins-hbase4:43281] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43281,1689261394504' 2023-07-13 15:16:34,739 DEBUG [RS:3;jenkins-hbase4:43281] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:34,739 DEBUG [RS:3;jenkins-hbase4:43281] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:34,740 DEBUG [RS:3;jenkins-hbase4:43281] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:34,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:34,740 DEBUG [RS:3;jenkins-hbase4:43281] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:34,740 DEBUG [RS:3;jenkins-hbase4:43281] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:34,740 DEBUG [RS:3;jenkins-hbase4:43281] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43281,1689261394504' 2023-07-13 15:16:34,740 DEBUG [RS:3;jenkins-hbase4:43281] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:34,740 DEBUG [RS:3;jenkins-hbase4:43281] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:34,741 DEBUG [RS:3;jenkins-hbase4:43281] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:34,741 INFO [RS:3;jenkins-hbase4:43281] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:34,741 INFO [RS:3;jenkins-hbase4:43281] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:34,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:34,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:34,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:34,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:34,747 DEBUG [hconnection-0x552ab686-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:34,749 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35400, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:34,753 DEBUG [hconnection-0x552ab686-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:34,754 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37224, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:34,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:34,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:34,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:34,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:34,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35058 deadline: 1689262594759, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:34,760 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:34,761 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:34,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:34,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:34,762 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:34,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:34,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:34,812 INFO [Listener at localhost/34653] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=564 (was 516) Potentially hanging thread: qtp1874399375-2329-acceptor-0@27a65c76-ServerConnector@617ef308{HTTP/1.1, (http/1.1)}{0.0.0.0:42771} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@1706e5e4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@4f269b40 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x02803a6e-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@443e44f8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 40347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2320 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp912985556-2598 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data6) 
java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x575b56c6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x4875d50c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/34653-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp774357224-2285-acceptor-0@2b9354f2-ServerConnector@6d04da20{HTTP/1.1, 
(http/1.1)}{0.0.0.0:41301} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData-prefix:jenkins-hbase4.apache.org,41029,1689261392701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1414997309_17 at /127.0.0.1:35774 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5934234d-shared-pool-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2594 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x4e929c0a-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40227-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 44615 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@758ff98d[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x02803a6e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@26442371 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2599 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:54390@0x40c8b085 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/34653.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 4 on default port 40347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp353748819-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:32909 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:32909 from 
jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 267518829@qtp-883992593-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39959 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1874399375-2325 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1760076879@qtp-1707088403-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_86069843_17 at /127.0.0.1:33436 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 45993 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp774357224-2291 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 908384914@qtp-883992593-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:45993 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-74237991_17 at /127.0.0.1:35808 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2601 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-183143962_17 at /127.0.0.1:38252 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp912985556-2597 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44715-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 749260906@qtp-1334008484-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a-prefix:jenkins-hbase4.apache.org,44715,1689261393186.meta 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x19c02ef4-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 40347 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins@localhost:45993 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data3/current/BP-1975456767-172.31.14.131-1689261391921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x40c8b085-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 44615 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:40227Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS:3;jenkins-hbase4:43281-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44715 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2314-acceptor-0@3f1bb05f-ServerConnector@72f6fde0{HTTP/1.1, (http/1.1)}{0.0.0.0:35053} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x139f189c-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x02803a6e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 1 on default port 44615 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:45993 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1874399375-2326 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 45993 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/34081-SendThread(127.0.0.1:59953) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: IPC Server handler 4 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1916015537-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x552ab686-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@24c740ab java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353748819-2223 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x08ead63f 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1102316695@qtp-1334008484-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41743 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Server handler 0 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging 
thread: hconnection-0x19c02ef4-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp912985556-2596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x575b56c6-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x4875d50c-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1916015537-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 45993 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:45993 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41029,1689261392701 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: 1790959576@qtp-924848190-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42927 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-74237991_17 at /127.0.0.1:38246 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x4e929c0a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261393499 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d7abbee java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 44615 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x19c02ef4-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp774357224-2284 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x4e929c0a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4de090ae java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x575b56c6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35281 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 40347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/34653-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1916015537-2254 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x08ead63f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 1 on default port 45993 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1680326028@qtp-924848190-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1874399375-2332 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:54390 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_86069843_17 at /127.0.0.1:38254 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp353748819-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35281Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43281Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x19c02ef4-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:32909 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x40c8b085-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-5ca65b20-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59953@0x548b4587-SendThread(127.0.0.1:59953) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: IPC Server handler 1 on default port 40347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261393499 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1414997309_17 at /127.0.0.1:38212 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5af53e15-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_86069843_17 at /127.0.0.1:33460 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data5/current/BP-1975456767-172.31.14.131-1689261391921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1874399375-2327 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x19c02ef4-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 45993 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:32909 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-183143962_17 at /127.0.0.1:33422 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353748819-2224-acceptor-0@55219348-ServerConnector@50e3498a{HTTP/1.1, (http/1.1)}{0.0.0.0:42275} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x139f189c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:45993 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp912985556-2600 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data2/current/BP-1975456767-172.31.14.131-1689261391921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ProcessThread(sid:0 cport:54390): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a-prefix:jenkins-hbase4.apache.org,44715,1689261393186 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:41029 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1916015537-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data4/current/BP-1975456767-172.31.14.131-1689261391921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59953@0x548b4587 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2315 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43281 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data6/current/BP-1975456767-172.31.14.131-1689261391921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:45993 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_86069843_17 at /127.0.0.1:38270 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:45993 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1414997309_17 at /127.0.0.1:38176 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_86069843_17 at /127.0.0.1:35840 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1874399375-2331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1916015537-2260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 44615 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 2 on default port 45993 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp353748819-2230 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-183143962_17 at /127.0.0.1:35820 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353748819-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:45993 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a-prefix:jenkins-hbase4.apache.org,35281,1689261392876 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34081-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-74237991_17 at /127.0.0.1:33450 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:35281 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:45993 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_86069843_17 at /127.0.0.1:35834 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@57bb2bf3[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@59835f98 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 2058842548@qtp-1707088403-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44839 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59953@0x548b4587-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 4 on default port 44615 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@42043e9b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x19c02ef4-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x552ab686-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:40227 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2319 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-74237991_17 at /127.0.0.1:33408 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp353748819-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:45993 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp976075850-2321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2595-acceptor-0@66ecec9c-ServerConnector@134f4aae{HTTP/1.1, (http/1.1)}{0.0.0.0:44437} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x08ead63f-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:32909 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2290 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1874399375-2328 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/266250767.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp353748819-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1874399375-2330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:32909 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1916015537-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x19c02ef4-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:32909 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44715 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1916015537-2255-acceptor-0@106abe91-ServerConnector@6ba4dce4{HTTP/1.1, (http/1.1)}{0.0.0.0:39815} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: CacheReplicationMonitor(24806112) sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: Session-HouseKeeper-508d202c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a-prefix:jenkins-hbase4.apache.org,40227,1689261393033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-239ed2ad-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1975456767-172.31.14.131-1689261391921:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1916015537-2261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2318 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37719,1689261386793 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:32909 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1062445414) connection to localhost/127.0.0.1:32909 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@17bb529f sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34653-SendThread(127.0.0.1:54390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4037ed3e[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@69bdef2a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@66ba58da java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x4875d50c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/55670216.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44715Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data1/current/BP-1975456767-172.31.14.131-1689261391921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:35281-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34653 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1414997309_17 at /127.0.0.1:33384 [Receiving block BP-1975456767-172.31.14.131-1689261391921:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5daf6e09 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) 
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x19c02ef4-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-183143962_17 at /127.0.0.1:35746 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3ff34e7b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54390@0x139f189c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) - Thread LEAK? -, OpenFileDescriptor=834 (was 800) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=411 (was 459), ProcessCount=172 (was 172), AvailableMemoryMB=3890 (was 4116)
2023-07-13 15:16:34,815 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=564 is superior to 500
2023-07-13 15:16:34,832 INFO [Listener at localhost/34653] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=564, OpenFileDescriptor=834, MaxFileDescriptor=60000, SystemLoadAverage=411, ProcessCount=172, AvailableMemoryMB=3890
2023-07-13 15:16:34,832 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=564 is superior to 500
2023-07-13 15:16:34,832 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable
2023-07-13 15:16:34,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-13 15:16:34,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-13 15:16:34,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-13 15:16:34,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-13 15:16:34,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-13 15:16:34,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-13 15:16:34,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-13 15:16:34,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-13 15:16:34,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-13 15:16:34,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-13 15:16:34,843 INFO [RS:3;jenkins-hbase4:43281] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43281%2C1689261394504, suffix=, logDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,43281,1689261394504, archiveDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs, maxLogs=32
2023-07-13 15:16:34,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-13 15:16:34,847 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-13 15:16:34,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-13 15:16:34,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-13 15:16:34,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-13 15:16:34,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-13 15:16:34,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-13 15:16:34,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-13 15:16:34,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-13 15:16:34,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move
servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:34,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:34,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35058 deadline: 1689262594859, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:34,861 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:34,865 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:34,865 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK] 2023-07-13 15:16:34,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:34,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:34,866 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:34,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:34,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:34,869 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK] 2023-07-13 15:16:34,869 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK] 2023-07-13 15:16:34,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:34,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-13 15:16:34,876 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:34,876 INFO [RS:3;jenkins-hbase4:43281] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/WALs/jenkins-hbase4.apache.org,43281,1689261394504/jenkins-hbase4.apache.org%2C43281%2C1689261394504.1689261394843 2023-07-13 15:16:34,876 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-13 15:16:34,876 DEBUG [RS:3;jenkins-hbase4:43281] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38175,DS-499758ac-a945-4960-a7f2-45a0b4b8755e,DISK], DatanodeInfoWithStorage[127.0.0.1:45055,DS-0d9b8f29-d247-4af5-b040-e25e4625c530,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-adbd2bd2-5352-4846-8c6b-13245b223073,DISK]] 2023-07-13 15:16:34,878 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:34,878 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:34,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 15:16:34,879 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:34,881 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:34,882 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:34,883 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/40bdd8790f22a002c73f0fbd78904b67 empty. 2023-07-13 15:16:34,883 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:34,883 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-13 15:16:34,902 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:34,904 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 40bdd8790f22a002c73f0fbd78904b67, NAME => 't1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp 2023-07-13 15:16:34,915 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:34,916 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 40bdd8790f22a002c73f0fbd78904b67, disabling compactions & flushes 2023-07-13 15:16:34,916 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing 
region t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:34,916 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:34,916 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. after waiting 0 ms 2023-07-13 15:16:34,916 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:34,916 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:34,916 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 40bdd8790f22a002c73f0fbd78904b67: 2023-07-13 15:16:34,918 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:34,919 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261394919"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261394919"}]},"ts":"1689261394919"} 2023-07-13 15:16:34,920 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:34,921 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:34,921 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261394921"}]},"ts":"1689261394921"} 2023-07-13 15:16:34,922 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-13 15:16:34,930 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:34,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:34,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:34,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:34,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 15:16:34,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:34,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=40bdd8790f22a002c73f0fbd78904b67, ASSIGN}] 2023-07-13 15:16:34,932 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=40bdd8790f22a002c73f0fbd78904b67, ASSIGN 2023-07-13 15:16:34,932 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=t1, region=40bdd8790f22a002c73f0fbd78904b67, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43281,1689261394504; forceNewPlan=false, retain=false 2023-07-13 15:16:34,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 15:16:35,083 INFO [jenkins-hbase4:41029] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:35,084 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=40bdd8790f22a002c73f0fbd78904b67, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:35,084 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261395084"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261395084"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261395084"}]},"ts":"1689261395084"} 2023-07-13 15:16:35,086 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 40bdd8790f22a002c73f0fbd78904b67, server=jenkins-hbase4.apache.org,43281,1689261394504}] 2023-07-13 15:16:35,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 15:16:35,238 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:35,239 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:35,240 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46908, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:35,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 
2023-07-13 15:16:35,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 40bdd8790f22a002c73f0fbd78904b67, NAME => 't1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:35,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:35,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,245 INFO [StoreOpener-40bdd8790f22a002c73f0fbd78904b67-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,247 DEBUG [StoreOpener-40bdd8790f22a002c73f0fbd78904b67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/default/t1/40bdd8790f22a002c73f0fbd78904b67/cf1 2023-07-13 15:16:35,247 DEBUG [StoreOpener-40bdd8790f22a002c73f0fbd78904b67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/default/t1/40bdd8790f22a002c73f0fbd78904b67/cf1 2023-07-13 15:16:35,247 INFO [StoreOpener-40bdd8790f22a002c73f0fbd78904b67-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 40bdd8790f22a002c73f0fbd78904b67 columnFamilyName cf1 2023-07-13 15:16:35,248 INFO [StoreOpener-40bdd8790f22a002c73f0fbd78904b67-1] regionserver.HStore(310): Store=40bdd8790f22a002c73f0fbd78904b67/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:35,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/default/t1/40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/default/t1/40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/default/t1/40bdd8790f22a002c73f0fbd78904b67/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:35,254 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 40bdd8790f22a002c73f0fbd78904b67; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10331785440, jitterRate=-0.03777749836444855}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:35,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 40bdd8790f22a002c73f0fbd78904b67: 2023-07-13 15:16:35,254 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67., pid=14, masterSystemTime=1689261395238 2023-07-13 15:16:35,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:35,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:35,259 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=40bdd8790f22a002c73f0fbd78904b67, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:35,259 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261395259"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261395259"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261395259"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261395259"}]},"ts":"1689261395259"} 2023-07-13 15:16:35,261 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-13 15:16:35,261 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 40bdd8790f22a002c73f0fbd78904b67, server=jenkins-hbase4.apache.org,43281,1689261394504 in 174 msec 2023-07-13 15:16:35,263 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 15:16:35,263 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=40bdd8790f22a002c73f0fbd78904b67, ASSIGN in 330 msec 2023-07-13 15:16:35,263 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:35,264 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261395264"}]},"ts":"1689261395264"} 2023-07-13 15:16:35,265 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-13 15:16:35,268 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:35,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 399 msec 2023-07-13 15:16:35,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 15:16:35,482 INFO [Listener at localhost/34653] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-13 15:16:35,482 DEBUG [Listener at localhost/34653] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-13 15:16:35,482 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:35,484 INFO [Listener at localhost/34653] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-13 15:16:35,484 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:35,484 INFO [Listener at localhost/34653] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 2023-07-13 15:16:35,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:35,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-13 15:16:35,489 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:35,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-13 15:16:35,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:35058 deadline: 1689261455485, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-13 15:16:35,491 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:35,494 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=7 msec 2023-07-13 15:16:35,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:35,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:35,593 INFO [Listener at localhost/34653] client.HBaseAdmin$15(890): Started disable of t1 2023-07-13 15:16:35,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-13 15:16:35,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-13 15:16:35,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:35,598 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261395598"}]},"ts":"1689261395598"} 2023-07-13 15:16:35,599 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-13 15:16:35,601 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-13 15:16:35,602 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=40bdd8790f22a002c73f0fbd78904b67, UNASSIGN}] 2023-07-13 15:16:35,602 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=40bdd8790f22a002c73f0fbd78904b67, UNASSIGN 2023-07-13 15:16:35,603 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=40bdd8790f22a002c73f0fbd78904b67, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:35,603 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261395603"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261395603"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261395603"}]},"ts":"1689261395603"} 2023-07-13 15:16:35,604 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 40bdd8790f22a002c73f0fbd78904b67, server=jenkins-hbase4.apache.org,43281,1689261394504}] 2023-07-13 15:16:35,690 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:35,691 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-13 15:16:35,691 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:35,691 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-13 15:16:35,691 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:35,691 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-13 15:16:35,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:35,757 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,757 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 40bdd8790f22a002c73f0fbd78904b67, disabling compactions & flushes 2023-07-13 15:16:35,757 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:35,757 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:35,757 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. after waiting 0 ms 2023-07-13 15:16:35,757 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 
2023-07-13 15:16:35,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/default/t1/40bdd8790f22a002c73f0fbd78904b67/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:35,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67. 2023-07-13 15:16:35,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 40bdd8790f22a002c73f0fbd78904b67: 2023-07-13 15:16:35,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,763 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=40bdd8790f22a002c73f0fbd78904b67, regionState=CLOSED 2023-07-13 15:16:35,763 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689261395763"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261395763"}]},"ts":"1689261395763"} 2023-07-13 15:16:35,766 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-13 15:16:35,766 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 40bdd8790f22a002c73f0fbd78904b67, server=jenkins-hbase4.apache.org,43281,1689261394504 in 160 msec 2023-07-13 15:16:35,767 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-13 15:16:35,767 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=40bdd8790f22a002c73f0fbd78904b67, UNASSIGN in 164 msec 2023-07-13 15:16:35,768 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261395768"}]},"ts":"1689261395768"} 2023-07-13 15:16:35,769 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-13 15:16:35,770 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-13 15:16:35,772 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 178 msec 2023-07-13 15:16:35,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:35,900 INFO [Listener at localhost/34653] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-13 15:16:35,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-13 15:16:35,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-13 15:16:35,904 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-13 15:16:35,904 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-13 15:16:35,904 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-13 15:16:35,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:35,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:35,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:35,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 15:16:35,910 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,911 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/40bdd8790f22a002c73f0fbd78904b67/cf1, FileablePath, hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/40bdd8790f22a002c73f0fbd78904b67/recovered.edits] 2023-07-13 15:16:35,917 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/40bdd8790f22a002c73f0fbd78904b67/recovered.edits/4.seqid to hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/archive/data/default/t1/40bdd8790f22a002c73f0fbd78904b67/recovered.edits/4.seqid 2023-07-13 15:16:35,918 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/.tmp/data/default/t1/40bdd8790f22a002c73f0fbd78904b67 2023-07-13 15:16:35,918 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-13 15:16:35,920 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-13 15:16:35,922 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-13 15:16:35,923 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-13 15:16:35,924 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-13 15:16:35,924 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-13 15:16:35,924 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261395924"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:35,926 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:35,926 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 40bdd8790f22a002c73f0fbd78904b67, NAME => 't1,,1689261394868.40bdd8790f22a002c73f0fbd78904b67.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:35,926 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-13 15:16:35,926 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261395926"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:35,927 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-13 15:16:35,929 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-13 15:16:35,930 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 28 msec 2023-07-13 15:16:36,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 15:16:36,010 INFO [Listener at localhost/34653] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-13 15:16:36,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:36,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:36,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:36,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:36,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:36,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:36,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:36,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,029 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:36,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:36,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:36,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:36,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35058 deadline: 1689262596039, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:36,040 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:36,044 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:36,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,045 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:36,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:36,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:36,067 INFO [Listener at localhost/34653] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=574 (was 564) - Thread LEAK? -, OpenFileDescriptor=844 (was 834) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=401 (was 411), ProcessCount=172 (was 172), AvailableMemoryMB=3882 (was 3890) 2023-07-13 15:16:36,067 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-13 15:16:36,091 INFO [Listener at localhost/34653] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=574, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=401, ProcessCount=172, AvailableMemoryMB=3881 2023-07-13 15:16:36,091 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-13 15:16:36,091 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-13 15:16:36,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:36,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:36,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:36,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:36,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:36,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:36,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:36,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,108 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:36,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:36,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,112 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:36,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:36,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35058 deadline: 1689262596120, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:36,121 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:36,123 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:36,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,124 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:36,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:36,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:36,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-13 15:16:36,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:36,127 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-13 15:16:36,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-13 15:16:36,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 15:16:36,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:36,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:36,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:36,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:36,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:36,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:36,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:36,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,149 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:36,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:36,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:36,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:36,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35058 deadline: 1689262596159, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:36,160 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:36,162 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:36,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,164 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:36,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:36,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:36,188 INFO [Listener at localhost/34653] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=576 (was 574) - Thread LEAK? 
-, OpenFileDescriptor=844 (was 844), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=401 (was 401), ProcessCount=172 (was 172), AvailableMemoryMB=3876 (was 3881) 2023-07-13 15:16:36,189 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-13 15:16:36,207 INFO [Listener at localhost/34653] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=401, ProcessCount=172, AvailableMemoryMB=3875 2023-07-13 15:16:36,207 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-13 15:16:36,207 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-13 15:16:36,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:36,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:36,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:36,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:36,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:36,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:36,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:36,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,222 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:36,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:36,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,224 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:36,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:36,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35058 deadline: 1689262596231, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:36,232 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:36,234 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:36,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,235 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:36,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:36,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:36,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:36,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:36,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:36,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:36,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:36,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:36,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:36,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,252 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:36,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:36,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:36,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:36,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35058 deadline: 1689262596262, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:36,263 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:36,265 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:36,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,266 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:36,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:36,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:36,300 INFO [Listener at localhost/34653] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=577 (was 576) - Thread LEAK? 
-, OpenFileDescriptor=844 (was 844), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=401 (was 401), ProcessCount=172 (was 172), AvailableMemoryMB=3873 (was 3875) 2023-07-13 15:16:36,300 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-13 15:16:36,323 INFO [Listener at localhost/34653] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=577, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=401, ProcessCount=172, AvailableMemoryMB=3871 2023-07-13 15:16:36,323 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-13 15:16:36,324 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-13 15:16:36,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:36,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:36,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:36,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:36,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:36,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:36,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:36,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,337 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:36,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:36,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,339 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:36,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:36,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35058 deadline: 1689262596349, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:36,349 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:36,351 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:36,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,352 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:36,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:36,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:36,353 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-13 15:16:36,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-13 15:16:36,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-13 15:16:36,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:36,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-13 15:16:36,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:36,366 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 15:16:36,371 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:36,373 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-13 15:16:36,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 15:16:36,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-13 15:16:36,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:35058 deadline: 1689262596468, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-13 15:16:36,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-13 15:16:36,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:36,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 15:16:36,489 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-13 15:16:36,490 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-13 15:16:36,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 15:16:36,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-13 15:16:36,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-13 15:16:36,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-13 15:16:36,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 15:16:36,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-13 15:16:36,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:36,605 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:36,607 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:36,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-13 15:16:36,609 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:36,610 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-13 15:16:36,610 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:36,610 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:36,612 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:36,613 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-13 15:16:36,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-13 15:16:36,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-13 15:16:36,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-13 15:16:36,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 15:16:36,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:35058 deadline: 1689261456719, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-13 15:16:36,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:36,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
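The entries above (CreateNamespaceProcedure pid=20 through DeleteNamespaceProcedure pid=22, plus the two ConstraintExceptions) are the namespace/rsgroup binding checks of testNamespaceConstraint. Below is a minimal, hedged sketch of that flow written against the branch-2.4 RSGroupAdmin client API; it is illustrative only, not the actual TestRSGroupsAdmin1 source, and the class name and the "ns_bad" namespace are placeholders.

import java.io.IOException;

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class NamespaceConstraintSketch {
  // conn is assumed to be an open Connection to the (mini)cluster.
  static void run(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      RSGroupAdmin groups = new RSGroupAdminClient(conn);

      // Create the group, then bind a namespace to it via hbase.rsgroup.name
      // (RSGroupInfo.NAMESPACE_DESC_PROP_GROUP), as logged for pid=20 above.
      groups.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration(RSGroupInfo.NAMESPACE_DESC_PROP_GROUP, "Group_foo")
          .build());

      // While the namespace still references the group, removal is rejected:
      // "RSGroup Group_foo is referenced by namespace: Group_foo".
      try {
        groups.removeRSGroup("Group_foo");
      } catch (IOException expected) {
        // server-side ConstraintException, surfaced to the client as an IOException
      }

      // Drop the namespace first (pid=22 above); then the group can be removed.
      admin.deleteNamespace("Group_foo");
      groups.removeRSGroup("Group_foo");

      // Creating a namespace that names a group that does not exist is rejected by
      // RSGroupAdminEndpoint.preCreateNamespace ("Region server group foo does not exist.").
      try {
        admin.createNamespace(NamespaceDescriptor.create("ns_bad") // placeholder name
            .addConfiguration(RSGroupInfo.NAMESPACE_DESC_PROP_GROUP, "foo")
            .build());
      } catch (IOException expected) {
        // expected: group "foo" was never created
      }
    }
  }
}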
2023-07-13 15:16:36,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:36,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:36,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:36,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-13 15:16:36,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:36,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:36,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
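The cleanup that continues below (remove the extra groups, re-add a "master" group, then move the master's address into it) appears to be what produces the recurring "Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist." ConstraintException: 41029 is the active master's RPC port, and RSGroupAdminServer.moveServers only accepts addresses of online region servers, so the test logs the failure as "Got this on setup, FYI" and continues. A hedged sketch of that single call, assuming the same branch-2.4 client API as the previous sketch (the RSGroupAdmin instance is assumed to already exist):

import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

public class MoveMasterSketch {
  static void restoreMasterGroup(RSGroupAdmin groups) throws IOException {
    groups.addRSGroup("master");
    // The master's address, not a region server's; moveServers validates against
    // the set of online region servers and therefore rejects it.
    Address master = Address.fromParts("jenkins-hbase4.apache.org", 41029);
    try {
      groups.moveServers(Collections.singleton(master), "master");
    } catch (IOException benign) {
      // "Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist."
    }
  }
}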
2023-07-13 15:16:36,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:36,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:36,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:36,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:36,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:36,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:36,736 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:36,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:36,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:36,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:36,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:36,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:36,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41029] to rsgroup master 2023-07-13 15:16:36,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:36,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35058 deadline: 1689262596744, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 2023-07-13 15:16:36,745 WARN [Listener at localhost/34653] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41029 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:36,747 INFO [Listener at localhost/34653] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:36,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:36,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:36,748 INFO [Listener at localhost/34653] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35281, jenkins-hbase4.apache.org:40227, jenkins-hbase4.apache.org:43281, jenkins-hbase4.apache.org:44715], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:36,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:36,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41029] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:36,766 INFO [Listener at localhost/34653] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=577 (was 577), OpenFileDescriptor=844 (was 844), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=401 (was 401), ProcessCount=172 (was 172), AvailableMemoryMB=3871 (was 3871) 2023-07-13 15:16:36,766 WARN [Listener at localhost/34653] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-13 15:16:36,766 INFO [Listener at localhost/34653] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 15:16:36,766 INFO [Listener at localhost/34653] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 15:16:36,766 DEBUG [Listener at localhost/34653] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4875d50c to 127.0.0.1:54390 2023-07-13 15:16:36,766 DEBUG [Listener at localhost/34653] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:36,766 DEBUG [Listener at localhost/34653] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 
15:16:36,766 DEBUG [Listener at localhost/34653] util.JVMClusterUtil(257): Found active master hash=329339120, stopped=false 2023-07-13 15:16:36,766 DEBUG [Listener at localhost/34653] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:36,766 DEBUG [Listener at localhost/34653] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:36,766 INFO [Listener at localhost/34653] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:36,769 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:36,769 INFO [Listener at localhost/34653] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 15:16:36,769 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:36,769 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:36,769 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:36,769 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:36,769 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:36,769 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:36,769 DEBUG [Listener at localhost/34653] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x40c8b085 to 127.0.0.1:54390 2023-07-13 15:16:36,769 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:36,769 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:36,769 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:36,770 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
2023-07-13 15:16:36,769 DEBUG [Listener at localhost/34653] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:36,770 INFO [Listener at localhost/34653] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35281,1689261392876' ***** 2023-07-13 15:16:36,770 INFO [Listener at localhost/34653] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:36,770 INFO [Listener at localhost/34653] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40227,1689261393033' ***** 2023-07-13 15:16:36,770 INFO [Listener at localhost/34653] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:36,770 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:36,770 INFO [Listener at localhost/34653] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44715,1689261393186' ***** 2023-07-13 15:16:36,770 INFO [Listener at localhost/34653] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:36,770 INFO [Listener at localhost/34653] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43281,1689261394504' ***** 2023-07-13 15:16:36,771 INFO [Listener at localhost/34653] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:36,770 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:36,771 INFO [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:36,771 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-13 15:16:36,770 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:36,774 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-13 15:16:36,778 INFO [RS:0;jenkins-hbase4:35281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5d16009b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:36,778 INFO [RS:2;jenkins-hbase4:44715] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@72c3dca2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:36,778 INFO [RS:0;jenkins-hbase4:35281] server.AbstractConnector(383): Stopped ServerConnector@6ba4dce4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:36,778 INFO [RS:1;jenkins-hbase4:40227] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f4b851f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:36,778 INFO [RS:3;jenkins-hbase4:43281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3e8cf2fc{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:36,778 INFO [RS:2;jenkins-hbase4:44715] server.AbstractConnector(383): Stopped 
ServerConnector@72f6fde0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:36,778 INFO [RS:0;jenkins-hbase4:35281] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:36,779 INFO [RS:2;jenkins-hbase4:44715] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:36,779 INFO [RS:3;jenkins-hbase4:43281] server.AbstractConnector(383): Stopped ServerConnector@134f4aae{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:36,780 INFO [RS:0;jenkins-hbase4:35281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@295166fc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:36,779 INFO [RS:1;jenkins-hbase4:40227] server.AbstractConnector(383): Stopped ServerConnector@6d04da20{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:36,781 INFO [RS:0;jenkins-hbase4:35281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@75e5d650{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:36,780 INFO [RS:2;jenkins-hbase4:44715] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@736db136{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:36,780 INFO [RS:3;jenkins-hbase4:43281] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:36,782 INFO [RS:2;jenkins-hbase4:44715] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@51b99b82{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:36,782 INFO [RS:0;jenkins-hbase4:35281] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:36,781 INFO [RS:1;jenkins-hbase4:40227] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:36,783 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:36,783 INFO [RS:0;jenkins-hbase4:35281] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:36,783 INFO [RS:3;jenkins-hbase4:43281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d37a2b0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:36,784 INFO [RS:2;jenkins-hbase4:44715] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:36,784 INFO [RS:0;jenkins-hbase4:35281] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 15:16:36,784 INFO [RS:1;jenkins-hbase4:40227] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@42cd0009{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:36,785 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(3305): Received CLOSE for 33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:36,785 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:36,785 INFO [RS:2;jenkins-hbase4:44715] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:36,785 INFO [RS:3;jenkins-hbase4:43281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d3c5a39{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:36,786 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:36,786 INFO [RS:2;jenkins-hbase4:44715] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:36,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 33adbfeda53f12cfeeea717c33fa723a, disabling compactions & flushes 2023-07-13 15:16:36,786 INFO [RS:1;jenkins-hbase4:40227] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@288689f8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:36,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:36,786 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:36,786 DEBUG [RS:2;jenkins-hbase4:44715] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4e929c0a to 127.0.0.1:54390 2023-07-13 15:16:36,786 DEBUG [RS:2;jenkins-hbase4:44715] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:36,786 INFO [RS:2;jenkins-hbase4:44715] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:36,787 INFO [RS:2;jenkins-hbase4:44715] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:36,787 INFO [RS:2;jenkins-hbase4:44715] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:36,787 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 15:16:36,787 INFO [RS:3;jenkins-hbase4:43281] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:36,786 DEBUG [RS:0;jenkins-hbase4:35281] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02803a6e to 127.0.0.1:54390 2023-07-13 15:16:36,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 
2023-07-13 15:16:36,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. after waiting 0 ms 2023-07-13 15:16:36,787 DEBUG [RS:0;jenkins-hbase4:35281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:36,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:36,787 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 15:16:36,787 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1478): Online Regions={33adbfeda53f12cfeeea717c33fa723a=hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a.} 2023-07-13 15:16:36,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 33adbfeda53f12cfeeea717c33fa723a 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-13 15:16:36,787 DEBUG [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1504): Waiting on 33adbfeda53f12cfeeea717c33fa723a 2023-07-13 15:16:36,787 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 15:16:36,787 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-13 15:16:36,787 DEBUG [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-13 15:16:36,787 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:36,787 INFO [RS:3;jenkins-hbase4:43281] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:36,788 INFO [RS:3;jenkins-hbase4:43281] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:36,788 INFO [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:36,788 DEBUG [RS:3;jenkins-hbase4:43281] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x139f189c to 127.0.0.1:54390 2023-07-13 15:16:36,788 DEBUG [RS:3;jenkins-hbase4:43281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:36,788 INFO [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43281,1689261394504; all regions closed. 
2023-07-13 15:16:36,788 INFO [RS:1;jenkins-hbase4:40227] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:36,788 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:36,788 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:36,788 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:36,788 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:36,788 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:36,789 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:36,789 INFO [RS:1;jenkins-hbase4:40227] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:36,789 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-13 15:16:36,789 INFO [RS:1;jenkins-hbase4:40227] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:36,789 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(3305): Received CLOSE for a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:36,789 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:36,789 DEBUG [RS:1;jenkins-hbase4:40227] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x08ead63f to 127.0.0.1:54390 2023-07-13 15:16:36,790 DEBUG [RS:1;jenkins-hbase4:40227] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:36,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a934e69702da3551c62dbdada49afb86, disabling compactions & flushes 2023-07-13 15:16:36,790 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 15:16:36,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 2023-07-13 15:16:36,790 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1478): Online Regions={a934e69702da3551c62dbdada49afb86=hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86.} 2023-07-13 15:16:36,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 2023-07-13 15:16:36,790 DEBUG [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1504): Waiting on a934e69702da3551c62dbdada49afb86 2023-07-13 15:16:36,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. after waiting 0 ms 2023-07-13 15:16:36,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 
2023-07-13 15:16:36,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a934e69702da3551c62dbdada49afb86 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-13 15:16:36,796 DEBUG [RS:3;jenkins-hbase4:43281] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs 2023-07-13 15:16:36,796 INFO [RS:3;jenkins-hbase4:43281] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43281%2C1689261394504:(num 1689261394843) 2023-07-13 15:16:36,796 DEBUG [RS:3;jenkins-hbase4:43281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:36,796 INFO [RS:3;jenkins-hbase4:43281] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:36,797 INFO [RS:3;jenkins-hbase4:43281] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:36,797 INFO [RS:3;jenkins-hbase4:43281] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:36,797 INFO [RS:3;jenkins-hbase4:43281] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:36,797 INFO [RS:3;jenkins-hbase4:43281] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:36,797 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:36,799 INFO [RS:3;jenkins-hbase4:43281] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43281 2023-07-13 15:16:36,812 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:36,821 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/.tmp/info/5e6bd5b3599a4e29986facfae65ad148 2023-07-13 15:16:36,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86/.tmp/m/371cce4be317462f889eb0bd383f2a95 2023-07-13 15:16:36,824 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a/.tmp/info/27c3695e5d7541e48b6ddecaf527d229 2023-07-13 15:16:36,827 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:36,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:36,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 371cce4be317462f889eb0bd383f2a95 2023-07-13 15:16:36,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86/.tmp/m/371cce4be317462f889eb0bd383f2a95 as hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86/m/371cce4be317462f889eb0bd383f2a95 2023-07-13 15:16:36,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 27c3695e5d7541e48b6ddecaf527d229 2023-07-13 15:16:36,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a/.tmp/info/27c3695e5d7541e48b6ddecaf527d229 as hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a/info/27c3695e5d7541e48b6ddecaf527d229 2023-07-13 15:16:36,839 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e6bd5b3599a4e29986facfae65ad148 2023-07-13 15:16:36,839 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:36,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 371cce4be317462f889eb0bd383f2a95 2023-07-13 15:16:36,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86/m/371cce4be317462f889eb0bd383f2a95, entries=12, sequenceid=29, filesize=5.4 K 2023-07-13 15:16:36,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 27c3695e5d7541e48b6ddecaf527d229 2023-07-13 15:16:36,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for a934e69702da3551c62dbdada49afb86 in 56ms, sequenceid=29, compaction requested=false 2023-07-13 15:16:36,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a/info/27c3695e5d7541e48b6ddecaf527d229, entries=3, sequenceid=9, filesize=5.0 K 2023-07-13 15:16:36,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 33adbfeda53f12cfeeea717c33fa723a in 62ms, sequenceid=9, compaction requested=false 2023-07-13 15:16:36,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/rsgroup/a934e69702da3551c62dbdada49afb86/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-13 15:16:36,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/namespace/33adbfeda53f12cfeeea717c33fa723a/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-13 15:16:36,863 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/.tmp/rep_barrier/65dbf96365c541e289172fdf2b74425e 2023-07-13 15:16:36,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:36,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 33adbfeda53f12cfeeea717c33fa723a: 2023-07-13 15:16:36,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689261393966.33adbfeda53f12cfeeea717c33fa723a. 2023-07-13 15:16:36,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:36,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 2023-07-13 15:16:36,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a934e69702da3551c62dbdada49afb86: 2023-07-13 15:16:36,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689261394104.a934e69702da3551c62dbdada49afb86. 
2023-07-13 15:16:36,869 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 65dbf96365c541e289172fdf2b74425e 2023-07-13 15:16:36,887 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/.tmp/table/ab872fa501bf4df69dde7613d3900d31 2023-07-13 15:16:36,893 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ab872fa501bf4df69dde7613d3900d31 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43281,1689261394504 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:36,893 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:36,894 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43281,1689261394504] 2023-07-13 15:16:36,894 DEBUG [RegionServerTracker-0] 
master.DeadServer(103): Processing jenkins-hbase4.apache.org,43281,1689261394504; numProcessing=1 2023-07-13 15:16:36,894 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/.tmp/info/5e6bd5b3599a4e29986facfae65ad148 as hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/info/5e6bd5b3599a4e29986facfae65ad148 2023-07-13 15:16:36,896 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43281,1689261394504 already deleted, retry=false 2023-07-13 15:16:36,896 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43281,1689261394504 expired; onlineServers=3 2023-07-13 15:16:36,900 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e6bd5b3599a4e29986facfae65ad148 2023-07-13 15:16:36,901 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/info/5e6bd5b3599a4e29986facfae65ad148, entries=22, sequenceid=26, filesize=7.3 K 2023-07-13 15:16:36,902 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/.tmp/rep_barrier/65dbf96365c541e289172fdf2b74425e as hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/rep_barrier/65dbf96365c541e289172fdf2b74425e 2023-07-13 15:16:36,907 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 65dbf96365c541e289172fdf2b74425e 2023-07-13 15:16:36,908 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/rep_barrier/65dbf96365c541e289172fdf2b74425e, entries=1, sequenceid=26, filesize=4.9 K 2023-07-13 15:16:36,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/.tmp/table/ab872fa501bf4df69dde7613d3900d31 as hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/table/ab872fa501bf4df69dde7613d3900d31 2023-07-13 15:16:36,914 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ab872fa501bf4df69dde7613d3900d31 2023-07-13 15:16:36,914 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/table/ab872fa501bf4df69dde7613d3900d31, entries=6, sequenceid=26, filesize=5.1 K 2023-07-13 15:16:36,915 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 126ms, sequenceid=26, compaction requested=false 
2023-07-13 15:16:36,925 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-13 15:16:36,925 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:36,926 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:36,926 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:36,926 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:36,987 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35281,1689261392876; all regions closed. 2023-07-13 15:16:36,988 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44715,1689261393186; all regions closed. 2023-07-13 15:16:36,990 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40227,1689261393033; all regions closed. 2023-07-13 15:16:36,992 DEBUG [RS:0;jenkins-hbase4:35281] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs 2023-07-13 15:16:36,992 INFO [RS:0;jenkins-hbase4:35281] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35281%2C1689261392876:(num 1689261393861) 2023-07-13 15:16:36,992 DEBUG [RS:0;jenkins-hbase4:35281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:36,993 INFO [RS:0;jenkins-hbase4:35281] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:36,995 INFO [RS:0;jenkins-hbase4:35281] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:36,995 INFO [RS:0;jenkins-hbase4:35281] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:36,995 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:36,995 INFO [RS:0;jenkins-hbase4:35281] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:36,995 INFO [RS:0;jenkins-hbase4:35281] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 15:16:36,996 INFO [RS:0;jenkins-hbase4:35281] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35281 2023-07-13 15:16:36,999 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:36,999 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:36,999 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:36,999 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35281,1689261392876 2023-07-13 15:16:36,999 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35281,1689261392876] 2023-07-13 15:16:36,999 DEBUG [RS:2;jenkins-hbase4:44715] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs 2023-07-13 15:16:36,999 INFO [RS:2;jenkins-hbase4:44715] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44715%2C1689261393186.meta:.meta(num 1689261393911) 2023-07-13 15:16:36,999 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35281,1689261392876; numProcessing=2 2023-07-13 15:16:37,002 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35281,1689261392876 already deleted, retry=false 2023-07-13 15:16:37,002 DEBUG [RS:1;jenkins-hbase4:40227] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs 2023-07-13 15:16:37,002 INFO [RS:1;jenkins-hbase4:40227] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40227%2C1689261393033:(num 1689261393859) 2023-07-13 15:16:37,002 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35281,1689261392876 expired; onlineServers=2 2023-07-13 15:16:37,002 DEBUG [RS:1;jenkins-hbase4:40227] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:37,003 INFO [RS:1;jenkins-hbase4:40227] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:37,003 INFO [RS:1;jenkins-hbase4:40227] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:37,003 INFO [RS:1;jenkins-hbase4:40227] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:37,003 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-13 15:16:37,003 INFO [RS:1;jenkins-hbase4:40227] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:37,003 INFO [RS:1;jenkins-hbase4:40227] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:37,004 INFO [RS:1;jenkins-hbase4:40227] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40227 2023-07-13 15:16:37,005 DEBUG [RS:2;jenkins-hbase4:44715] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/oldWALs 2023-07-13 15:16:37,005 INFO [RS:2;jenkins-hbase4:44715] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44715%2C1689261393186:(num 1689261393867) 2023-07-13 15:16:37,005 DEBUG [RS:2;jenkins-hbase4:44715] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:37,005 INFO [RS:2;jenkins-hbase4:44715] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:37,005 INFO [RS:2;jenkins-hbase4:44715] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:37,005 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:37,006 INFO [RS:2;jenkins-hbase4:44715] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44715 2023-07-13 15:16:37,102 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,102 INFO [RS:0;jenkins-hbase4:35281] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35281,1689261392876; zookeeper connection closed. 
2023-07-13 15:16:37,102 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:35281-0x1015f41c6280001, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,102 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@509c97c5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@509c97c5 2023-07-13 15:16:37,103 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:37,103 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:37,103 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:37,103 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40227,1689261393033 2023-07-13 15:16:37,104 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44715,1689261393186 2023-07-13 15:16:37,105 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40227,1689261393033] 2023-07-13 15:16:37,105 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40227,1689261393033; numProcessing=3 2023-07-13 15:16:37,107 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40227,1689261393033 already deleted, retry=false 2023-07-13 15:16:37,107 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40227,1689261393033 expired; onlineServers=1 2023-07-13 15:16:37,107 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44715,1689261393186] 2023-07-13 15:16:37,107 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44715,1689261393186; numProcessing=4 2023-07-13 15:16:37,109 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44715,1689261393186 already deleted, retry=false 2023-07-13 15:16:37,109 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44715,1689261393186 expired; onlineServers=0 2023-07-13 15:16:37,109 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41029,1689261392701' ***** 2023-07-13 15:16:37,109 INFO 
[RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 15:16:37,110 DEBUG [M:0;jenkins-hbase4:41029] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@179ed65c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:37,110 INFO [M:0;jenkins-hbase4:41029] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:37,113 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:37,113 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:37,113 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:37,113 INFO [M:0;jenkins-hbase4:41029] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@21d993e9{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:37,113 INFO [M:0;jenkins-hbase4:41029] server.AbstractConnector(383): Stopped ServerConnector@50e3498a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:37,114 INFO [M:0;jenkins-hbase4:41029] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:37,114 INFO [M:0;jenkins-hbase4:41029] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ce8454f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:37,115 INFO [M:0;jenkins-hbase4:41029] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@558dcba1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:37,115 INFO [M:0;jenkins-hbase4:41029] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41029,1689261392701 2023-07-13 15:16:37,115 INFO [M:0;jenkins-hbase4:41029] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41029,1689261392701; all regions closed. 2023-07-13 15:16:37,115 DEBUG [M:0;jenkins-hbase4:41029] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:37,115 INFO [M:0;jenkins-hbase4:41029] master.HMaster(1491): Stopping master jetty server 2023-07-13 15:16:37,116 INFO [M:0;jenkins-hbase4:41029] server.AbstractConnector(383): Stopped ServerConnector@617ef308{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:37,116 DEBUG [M:0;jenkins-hbase4:41029] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 15:16:37,116 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-13 15:16:37,116 DEBUG [M:0;jenkins-hbase4:41029] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 15:16:37,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261393499] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261393499,5,FailOnTimeoutGroup] 2023-07-13 15:16:37,117 INFO [M:0;jenkins-hbase4:41029] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 15:16:37,117 INFO [M:0;jenkins-hbase4:41029] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-13 15:16:37,116 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261393499] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261393499,5,FailOnTimeoutGroup] 2023-07-13 15:16:37,117 INFO [M:0;jenkins-hbase4:41029] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-13 15:16:37,117 DEBUG [M:0;jenkins-hbase4:41029] master.HMaster(1512): Stopping service threads 2023-07-13 15:16:37,117 INFO [M:0;jenkins-hbase4:41029] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 15:16:37,117 ERROR [M:0;jenkins-hbase4:41029] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-13 15:16:37,117 INFO [M:0;jenkins-hbase4:41029] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 15:16:37,117 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 15:16:37,118 DEBUG [M:0;jenkins-hbase4:41029] zookeeper.ZKUtil(398): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 15:16:37,118 WARN [M:0;jenkins-hbase4:41029] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 15:16:37,118 INFO [M:0;jenkins-hbase4:41029] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 15:16:37,118 INFO [M:0;jenkins-hbase4:41029] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 15:16:37,118 DEBUG [M:0;jenkins-hbase4:41029] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:37,118 INFO [M:0;jenkins-hbase4:41029] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:37,118 DEBUG [M:0;jenkins-hbase4:41029] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:37,118 DEBUG [M:0;jenkins-hbase4:41029] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:37,118 DEBUG [M:0;jenkins-hbase4:41029] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 15:16:37,118 INFO [M:0;jenkins-hbase4:41029] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.17 KB heapSize=90.63 KB 2023-07-13 15:16:37,129 INFO [M:0;jenkins-hbase4:41029] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.17 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9cdd25e5198048bfa6044bc2c3a9e4d1 2023-07-13 15:16:37,135 DEBUG [M:0;jenkins-hbase4:41029] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9cdd25e5198048bfa6044bc2c3a9e4d1 as hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9cdd25e5198048bfa6044bc2c3a9e4d1 2023-07-13 15:16:37,139 INFO [M:0;jenkins-hbase4:41029] regionserver.HStore(1080): Added hdfs://localhost:45993/user/jenkins/test-data/2e5c92e1-6f7c-28cb-ef8f-764231f4959a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9cdd25e5198048bfa6044bc2c3a9e4d1, entries=22, sequenceid=175, filesize=11.1 K 2023-07-13 15:16:37,140 INFO [M:0;jenkins-hbase4:41029] regionserver.HRegion(2948): Finished flush of dataSize ~76.17 KB/77999, heapSize ~90.62 KB/92792, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false 2023-07-13 15:16:37,142 INFO [M:0;jenkins-hbase4:41029] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:37,142 DEBUG [M:0;jenkins-hbase4:41029] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:37,144 INFO [M:0;jenkins-hbase4:41029] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 15:16:37,144 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:37,145 INFO [M:0;jenkins-hbase4:41029] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41029 2023-07-13 15:16:37,148 DEBUG [M:0;jenkins-hbase4:41029] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41029,1689261392701 already deleted, retry=false 2023-07-13 15:16:37,369 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,369 INFO [M:0;jenkins-hbase4:41029] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41029,1689261392701; zookeeper connection closed. 2023-07-13 15:16:37,369 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): master:41029-0x1015f41c6280000, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,469 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,469 INFO [RS:1;jenkins-hbase4:40227] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40227,1689261393033; zookeeper connection closed. 
2023-07-13 15:16:37,469 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:40227-0x1015f41c6280002, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,469 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@d50c982] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@d50c982 2023-07-13 15:16:37,569 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,569 INFO [RS:2;jenkins-hbase4:44715] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44715,1689261393186; zookeeper connection closed. 2023-07-13 15:16:37,569 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:44715-0x1015f41c6280003, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,570 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@63dd53f4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@63dd53f4 2023-07-13 15:16:37,670 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,670 DEBUG [Listener at localhost/34653-EventThread] zookeeper.ZKWatcher(600): regionserver:43281-0x1015f41c628000b, quorum=127.0.0.1:54390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:37,670 INFO [RS:3;jenkins-hbase4:43281] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43281,1689261394504; zookeeper connection closed. 
2023-07-13 15:16:37,670 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@31e90b2d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@31e90b2d 2023-07-13 15:16:37,670 INFO [Listener at localhost/34653] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-13 15:16:37,670 WARN [Listener at localhost/34653] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:37,674 INFO [Listener at localhost/34653] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:37,776 WARN [BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:37,776 WARN [BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1975456767-172.31.14.131-1689261391921 (Datanode Uuid 2b56587d-1107-4143-a9f6-48517e5dc2eb) service to localhost/127.0.0.1:45993 2023-07-13 15:16:37,777 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data5/current/BP-1975456767-172.31.14.131-1689261391921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:37,777 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data6/current/BP-1975456767-172.31.14.131-1689261391921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:37,778 WARN [Listener at localhost/34653] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:37,781 INFO [Listener at localhost/34653] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:37,883 WARN [BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:37,883 WARN [BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1975456767-172.31.14.131-1689261391921 (Datanode Uuid 63faf62a-acb0-4cf6-9141-36c70b4dfcf1) service to localhost/127.0.0.1:45993 2023-07-13 15:16:37,884 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data3/current/BP-1975456767-172.31.14.131-1689261391921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:37,884 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data4/current/BP-1975456767-172.31.14.131-1689261391921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-13 15:16:37,885 WARN [Listener at localhost/34653] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:37,888 INFO [Listener at localhost/34653] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:37,991 WARN [BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:37,991 WARN [BP-1975456767-172.31.14.131-1689261391921 heartbeating to localhost/127.0.0.1:45993] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1975456767-172.31.14.131-1689261391921 (Datanode Uuid 89fbb8c5-bfa4-468c-904e-5d6b7588ce61) service to localhost/127.0.0.1:45993 2023-07-13 15:16:37,992 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data1/current/BP-1975456767-172.31.14.131-1689261391921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:37,992 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6decc97a-917f-5bf5-ccce-1990e1a04162/cluster_1d2b012a-4d46-fa01-7ffd-64bc913238f5/dfs/data/data2/current/BP-1975456767-172.31.14.131-1689261391921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:38,000 INFO [Listener at localhost/34653] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:38,114 INFO [Listener at localhost/34653] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 15:16:38,140 INFO [Listener at localhost/34653] hbase.HBaseTestingUtility(1293): Minicluster is down